Hierarchical Reinforcement Learning is a promising approach to long-horizon decision-making problems with sparse rewards. Unfortunately, most methods still decouple the lower-level skill acquisition process from the training of a higher level that controls the skills in a new task. Treating the skills as fixed can lead to significant sub-optimality in the transfer setting. In this work, we propose a novel algorithm to discover a set of skills and continuously adapt them, along with the higher level, even when training on a new task. Our main contributions are two-fold. First, we derive a new hierarchical policy gradient, as well as an unbiased latent-dependent baseline, and we introduce Hierarchical Proximal Policy Optimization (HiPPO), an on-policy method to efficiently train all levels of the hierarchy simultaneously. Second, we propose a method of training time-abstractions that improves the robustness of the obtained skills to environment changes. Code is available at sites.google.com/view/hippo-rl.

Reinforcement learning (RL) has made great progress in a variety of domains, from playing games such as Pong and Go BID5 BID20 to automating robotic locomotion BID11 BID20 b), dexterous manipulation (b; BID1), and perception BID7 a). Yet most work in RL still learns a new behavior from scratch when faced with a new problem. This is particularly inefficient when dealing with tasks that are hard to solve due to sparse rewards or long horizons, or when solving many related tasks. A promising technique for overcoming this limitation is Hierarchical Reinforcement Learning (HRL) BID17 a). In this paradigm, policies have several modules of abstraction, which makes it easier to reuse a subset of the modules. The most common case is temporal abstraction BID9, where a higher-level policy (manager) takes actions at a lower frequency, and its actions condition the behavior of lower-level skills or sub-policies. When transferring knowledge to a new task, most prior works fix the skills and train a new manager on top. Despite having a clear benefit in kick-starting learning in the new task, fixed skills can considerably cap the final performance on the new task (a). Little work has been done on adapting pre-trained sub-policies to be optimal for a new task.

In this paper, we develop a new framework for adapting all levels of temporal hierarchies simultaneously. First, we derive an efficient approximated hierarchical policy gradient. Our key insight is that, under mild assumptions, the manager's decisions can be considered part of the observation from the perspective of the sub-policies. This decouples the gradient with respect to the manager and the sub-policy parameters, and provides theoretical justification for a technique used in other prior works. Second, we introduce an unbiased sub-policy-specific baseline for our hierarchical policy gradient. Our experiments reveal faster convergence, suggesting efficient gradient variance reduction. We then introduce a more stable way of using this gradient, Hierarchical Proximal Policy Optimization (HiPPO), which helps us take more conservative steps in policy space BID12, necessary in hierarchies because of the interdependence of the layers. Finally, we also evaluate the benefit of varying the time-commitment to the sub-policies, and show that it helps both in terms of final performance and zero-shot adaptation to similar tasks.
We define a discrete-time finite-horizon discounted Markov decision process (MDP) by a tuple $M = (\mathcal{S}, \mathcal{A}, P, r, \rho_0, \gamma, H)$, where $\mathcal{S}$ is a state set, $\mathcal{A}$ is an action set, $P: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}_+$ is the transition probability distribution, $\gamma \in [0, 1)$ is a discount factor, and $H$ is the horizon. Our objective is to find a stochastic policy $\pi_\theta$ that maximizes the expected discounted reward within the MDP, $\eta(\pi_\theta) = \mathbb{E}_\tau\big[\sum_{t=0}^{H} \gamma^t r(s_t, a_t)\big]$. We denote by $\tau = (s_0, a_0, \ldots)$ the entire state-action trajectory, where $s_0 \sim \rho_0(s_0)$, $a_t \sim \pi_\theta(a_t|s_t)$, and $s_{t+1} \sim P(s_{t+1}|s_t, a_t)$.

[Figure 1: Temporal hierarchy studied in this paper. A latent code $z_t$ is sampled from the manager policy $\pi_{\theta_h}(z_t|s_t)$ every $p$ time-steps, using the current observation $s_{kp}$. The actions $a_t$ are sampled from the sub-policy $\pi_{\theta_l}(a_t|s_t, z_{kp})$ conditioned on the same latent code from timestep $t = kp$ to timestep $(k+1)p - 1$.]

Prior works have focused on learning a manager that combines provided sub-policies, but they do not further train the sub-policies when learning a new task. However, preventing the skills from learning results in sub-optimal behavior on new tasks. This effect is exacerbated when the skills were learned in a task-agnostic way or in a different environment. In this paper, we present an HRL method that learns all levels of abstraction in the hierarchical policy: the manager learns to make use of the low-level skills, while the skills are continuously adapted to attain maximum performance on the given task. We derive a policy gradient update for hierarchical policies that monotonically improves performance. Furthermore, we demonstrate that our approach prevents the sub-policy collapse behavior observed in previous approaches, where the manager ends up using just one skill.

When using a hierarchical policy, the intermediate decision taken by the higher level is not directly applied in the environment. This makes it unclear how such a decision should be incorporated into the Markovian framework of RL: should it be treated as an observed variable, like an action, or as a latent? In this section, we first prove that one framework is an approximation of the other under mild assumptions. Then, we derive an unbiased baseline for the HRL setup that reduces the variance of the gradient estimate. Thirdly, we introduce the notions of information bottleneck and trajectory compression, which prove critical for learning reusable skills. Finally, with these findings, we present our method, Hierarchical Proximal Policy Optimization (HiPPO), an on-policy algorithm for hierarchical policies that monotonically improves the RL objective, allowing learning at all levels of the policy and preventing sub-policy collapse.

Policy gradient algorithms are based on the likelihood ratio trick BID21 to estimate the gradient of returns with respect to the policy parameters as

$$\nabla_\theta \eta(\pi_\theta) = \mathbb{E}_\tau\big[\nabla_\theta \log P(\tau)\, R(\tau)\big] \quad (1)$$

In the context of HRL, a hierarchical policy with a manager $\pi_{\theta_h}(z_t|s_t)$ selects every $p$ time-steps one of $n$ sub-policies to execute. These sub-policies, indexed by $z \in [n]$, can be represented as a single conditional probability distribution over actions, $\pi_{\theta_l}(a_t|z_t, s_t)$. This allows us to also leverage skills learned with Stochastic Neural Networks (SNNs) (a). Under this framework, the probability of a trajectory $\tau = (s_0, a_0, s_1, \ldots, s_H)$ can be written as

$$P(\tau) = \rho_0(s_0) \Big[\prod_{t=0}^{H-1} P(s_{t+1}|s_t, a_t)\Big] \prod_{k=0}^{H/p - 1} \Big[\sum_{j=1}^{n} \pi_{\theta_h}(z_j|s_{kp}) \prod_{t=kp}^{(k+1)p-1} \pi_{\theta_l}(a_t|s_t, z_j)\Big] \quad (2)$$

The mixture action distribution, which presents itself as an additional summation over skills, prevents the additive factorization when taking the logarithm, as in Eq. 1.
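As a concrete illustration of the generative process this hierarchy defines, here is a minimal sampling sketch. It is not the authors' code: `env`, `manager_probs`, and `skill_act` are hypothetical gym-style stand-ins for the environment, $\pi_{\theta_h}$, and $\pi_{\theta_l}$.

```python
import numpy as np

def sample_trajectory(env, manager_probs, skill_act, p, horizon, rng):
    """Roll out the temporal hierarchy: manager decides every p steps."""
    s = env.reset()
    trajectory = []
    z = None
    for t in range(horizon):
        if t % p == 0:
            # Manager decision: sample a latent code z ~ pi_theta_h(.|s_kp).
            probs = manager_probs(s)
            z = rng.choice(len(probs), p=probs)
        # Sub-policy decision: a_t ~ pi_theta_l(.|s_t, z_kp), conditioned on
        # the latent sampled at the last manager step.
        a = skill_act(s, z, rng)
        s_next, r, done = env.step(a)
        trajectory.append((s, z, a, r))
        s = s_next
        if done:
            break
    return trajectory
```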
The mixture sum can also yield considerable numerical instabilities due to the product of the $p$ sub-policy probabilities. For instance, in the case where all the skills are distinguishable, all the sub-policy probabilities but one will have small values, resulting in an exponentially small product. In the following Lemma, we derive an approximation of the policy gradient whose error tends to zero as the skills become more diverse, and draw insights on the interplay with the manager actions.

Lemma 1. If the skills are sufficiently differentiated, then the latent variable can be treated as part of the observation to compute the gradient of the trajectory probability. Let $\pi_{\theta_h}(z|s)$ and $\pi_{\theta_l}(a|s, z)$ be Lipschitz functions w.r.t. their parameters, and assume that $0 < \pi_{\theta_l}(a_t|s_t, z_j) < \epsilon$ for all $z_j \neq z_{kp}$. Then

$$\nabla_\theta \log P(\tau) \approx \sum_{k=0}^{H/p - 1} \Big[\nabla_\theta \log \pi_{\theta_h}(z_{kp}|s_{kp}) + \sum_{t=kp}^{(k+1)p-1} \nabla_\theta \log \pi_{\theta_l}(a_t|s_t, z_{kp})\Big] \quad (3)$$

Proof. See Appendix.

Our assumption is that the skills are diverse: for each action there is just one sub-policy that gives it high probability. In this case, the latent variable can be treated as part of the observation to compute the gradient of the trajectory probability. Many algorithms for extracting lower-level skills are based on promoting diversity among the skills (a;), so our assumption usually holds. We further analyze this assumption empirically in the appendix.

The REINFORCE policy gradient estimate is known to have large variance. A very common approach to mitigate this issue without biasing the estimate is to subtract a baseline from the returns BID8. We show how, under the assumptions of Lemma 1, we can formulate an unbiased latent-dependent baseline for the approximate gradient (Eq. 3).

Lemma 2. For any functions $b_h: \mathcal{S} \to \mathbb{R}$ and $b_l: \mathcal{S} \times \mathcal{Z} \to \mathbb{R}$ we have:

$$\mathbb{E}_\tau\big[\nabla_\theta \log \pi_{\theta_h}(z_{kp}|s_{kp})\, b_h(s_{kp})\big] = 0 \qquad \mathbb{E}_\tau\big[\nabla_\theta \log \pi_{\theta_l}(a_t|s_t, z_{kp})\, b_l(s_t, z_{kp})\big] = 0 \quad (4)$$

Proof. See Appendix.

Now we apply Lemma 1 and Lemma 2 to Eq. 1. By using the corresponding value functions as the baselines, the return can be replaced by the Advantage function BID11, and we obtain the following gradient expression:

$$\nabla_\theta \eta(\pi_\theta) \approx \mathbb{E}_\tau\Big[\sum_{k=0}^{H/p - 1} \Big(\nabla_\theta \log \pi_{\theta_h}(z_{kp}|s_{kp})\, \hat{A}_h(s_{kp}, z_{kp}) + \sum_{t=kp}^{(k+1)p-1} \nabla_\theta \log \pi_{\theta_l}(a_t|s_t, z_{kp})\, \hat{A}_l(s_t, z_{kp}, a_t)\Big)\Big] \quad (5)$$

This hierarchical policy gradient estimate has lower variance than without baselines, but using it for policy optimization through stochastic gradient descent still yields an unstable algorithm. In the next section, we further improve the stability and sample efficiency of the policy optimization by incorporating techniques from Proximal Policy Optimization BID12.

Using an appropriate step size in policy space is critical for stable policy learning. We adopt the approach of Proximal Policy Optimization (PPO) BID12, which modifies the cost function in a way that prevents large changes to the policy while only requiring the computation of the likelihood. Letting $r_{h,kp}(\theta) = \frac{\pi_{\theta_h}(z_{kp}|s_{kp})}{\pi_{\theta_{h,\mathrm{old}}}(z_{kp}|s_{kp})}$ and $r_{l,t}(\theta) = \frac{\pi_{\theta_l}(a_t|s_t, z_{kp})}{\pi_{\theta_{l,\mathrm{old}}}(a_t|s_t, z_{kp})}$, and using the super-index clip to denote the clipped objective version, we obtain the new surrogate objective:

$$L^{\mathrm{CLIP}}(\theta) = \mathbb{E}_\tau\Big[\sum_{k} \min\big(r_{h,kp}(\theta)\hat{A}_h(s_{kp}, z_{kp}),\; r^{\mathrm{clip}}_{h,kp}(\theta)\hat{A}_h(s_{kp}, z_{kp})\big) + \sum_{t} \min\big(r_{l,t}(\theta)\hat{A}_l(s_t, z_{kp}, a_t),\; r^{\mathrm{clip}}_{l,t}(\theta)\hat{A}_l(s_t, z_{kp}, a_t)\big)\Big] \quad (6)$$

We call this algorithm Hierarchical Proximal Policy Optimization (HiPPO). Next, we introduce two critical additions: a switching of the time-commitment between skills, and an information bottleneck at the lower level. Both are detailed in the following subsections.

Most hierarchical methods either consider a fixed time-commitment to the lower-level skills (a;) or implement the complex options framework BID9 BID2. In this work we propose an in-between, where the time-commitment to the skills is a random variable sampled from a fixed distribution Categorical$([T_{min}, T_{max}])$ just before the manager takes a decision. This modification does not hinder final performance, and we show it improves zero-shot adaptation to a new task.
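Before moving on, here is a minimal sketch of the clipped surrogate of Eq. 6, assuming the per-decision likelihood ratios and advantage estimates have already been computed; the array names and shapes are illustrative, not the paper's implementation.

```python
import numpy as np

def hippo_clip_objective(ratio_h, adv_h, ratio_l, adv_l, eps=0.1):
    """Clipped surrogate for both levels of the hierarchy (Eq. 6 sketch).

    ratio_h, adv_h: arrays with one entry per manager decision.
    ratio_l, adv_l: arrays with one entry per environment timestep.
    eps: PPO clipping parameter (0.1 in the experiments).
    """
    # Manager term: min(r * A, clip(r, 1-eps, 1+eps) * A).
    h_term = np.minimum(ratio_h * adv_h,
                        np.clip(ratio_h, 1 - eps, 1 + eps) * adv_h)
    # Same clipped term for the sub-policies.
    l_term = np.minimum(ratio_l * adv_l,
                        np.clip(ratio_l, 1 - eps, 1 + eps) * adv_l)
    # The objective to maximize sums both levels' clipped terms.
    return h_term.sum() + l_term.sum()
```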
The random time-commitment approach to sampling rollouts is detailed in Algorithm 1 in the appendix. If we apply the above HiPPO algorithm in the general case, there is little incentive to either learn or maintain a diverse set of skills. We claim this can be addressed via two simple additions:

• Let $z$ only take a finite number of values.
• Provide a masked observation $o_t = f(s_t)$ to the skills, i.e. sample actions from $\pi_{\theta_l}(a_t|f(s_t), z_{kp})$.

The masking function $f$ restricts the information about the task, such that a single skill cannot perform the full task. We use a hard-coded agent-space and problem-space split (; a) that hides all task-related information and only allows the sub-policies to see proprioceptive information. With this setup, all the missing information needed to perform the task must come from the sequence of latent codes passed to the skills. We can interpret this as a lossy compression, whereby the manager encodes the relevant problem information into $\log n$ bits sufficient for the next $p$ timesteps.

We design the experiments to answer the following questions: 1) How does HiPPO compare against a flat policy when learning from scratch? 2) Does it lead to more robust policies? 3) How well does it adapt already learned skills? 4) Does our skill diversity assumption hold in practice?

[Figure 4: Benefit of adapting some given skills when the preferences of the environment are different from those of the environment where the skills were originally trained.]

In this section, we study the benefit of using the HiPPO algorithm instead of standard PPO on a flat policy BID12. The results, shown in FIG0, demonstrate that training from scratch with HiPPO leads to faster learning and better performance than flat PPO. Furthermore, the benefit of HiPPO does not just come from having temporally correlated exploration, as PPO with action repeat converges at a performance level well below our method. Finally, FIG1 shows the effectiveness of using the presented baseline.

For the adaptation task, we take 6 pre-trained sub-policies encoded by a Stochastic Neural Network BID18 that were trained in a diversity-promoting environment (a). We fine-tune them with HiPPO on the Gather environment, but with an extra penalty on the velocity of the center of mass. This can be understood as a preference for cautious behavior, and it requires adjustment of the sub-policies, which were trained with a proxy reward encouraging them to move as far (and hence as fast) as possible. Fig. 4 shows the difference between fixing the sub-policies and only training a manager with PPO, versus using HiPPO to simultaneously train a manager and fine-tune the skills. The two initially learn at the same rate, but HiPPO's ability to adjust to the new dynamics allows it to reach a higher final performance.

In this paper, we examined how to effectively adapt hierarchical policies. We began by deriving a hierarchical policy gradient and an approximation of it. We then proposed a new method, HiPPO, that can stably train multiple layers of a hierarchy. The adaptation experiments suggested that we can optimize pre-trained skills for downstream environments and learn emergent skills without any unsupervised pre-training. We also explored hierarchy from an information bottleneck point of view, demonstrating that HiPPO with randomized period can learn from scratch on sparse-reward and long-horizon tasks, while outperforming non-hierarchical methods on zero-shot transfer. There are many enticing avenues of future work.
For instance, replacing the manually designed bottleneck with a variational autoencoder with an information bottleneck could further improve HiPPO's performance and extend the gains seen here to other tasks. Also, as HiPPO provides a policy architecture and gradient expression, we could explore using meta-learning on top of it in order to learn better skills that are more useful on a distribution of different tasks.

The key points in HRL are how the different levels of the hierarchy are defined, trained, and then re-used. In this work, we are interested in approaches that build temporal abstractions by having a higher level take decisions at a slower frequency than the lower level. There has been growing interest in HRL for the past few decades BID17 BID9, but only recently has it been applied to high-dimensional continuous domains, as we do in this work.

To obtain the lower-level policies, or skills, most methods exploit some additional assumptions, like access to demonstrations (; BID4 BID10 BID13), policy sketches BID0, or task decomposition into sub-tasks (; BID16). Other methods use a different reward for the lower level, often constraining it to be a "goal reacher" policy, where the signal from the higher level is the goal to reach BID6 BID3 BID20. These methods are very promising for state-reaching tasks, but might require access to goal-reaching reward systems not defined in the original MDP, and are more limited when training on tasks beyond state-reaching. Our method does not require any additional supervision, and the obtained skills are not constrained to be goal-reaching.

When transferring skills to a new environment, most HRL methods keep them fixed and simply train a new higher level on top. Other work allows for building on previous skills by constantly supplementing the set of skills with new ones BID14, but it requires a hand-defined curriculum of tasks, and the previous skills are never fine-tuned. Our algorithm allows for seamless adaptation of the skills, showing no trade-off between leveraging the power of the hierarchy and the final performance in a new task. Other methods use invertible functions as skills, so a fixed skill can be fully over-written when a new layer of hierarchy is added on top. This kind of "fine-tuning" is promising, although it has not been applied to the temporally extended skills we are interested in here.

One of the most general frameworks for defining temporally extended hierarchies is the options framework BID17, and it has recently been applied to continuous state spaces BID2. One of the most delicate parts of this formulation is the termination policy, which requires several regularizers to avoid skill collapse BID2 BID19. This modification of the objective may be difficult to tune and affects the final performance. Instead of adding such penalties, we propose having skills of a random length, not controlled by the agent during training of the skills. The benefit is two-fold: there is no termination policy to train, and the skills are more stable and transfer better. Furthermore, these works only used discrete-action MDPs. We lift this assumption and show good performance of our algorithm on complex locomotion tasks. The closest work to ours in terms of the final algorithm is the one proposed by. Their method can be included in our framework, and hence benefits from our new theoretical insights.
We also introduce two modifications that are shown to be highly beneficial: the random time-commitment explained above, and the notion of an information bottleneck to obtain skills that generalize better.

Algorithm 1 Collect Rollout
1: Input: skills $\pi_{\theta_l}(a|o, z)$, manager $\pi_{\theta_h}(z|s)$, time-commitment bounds $P_{min}$ and $P_{max}$, horizon $H$, and bottleneck function $o = f(s)$
2: Reset environment: $s_0 \sim \rho_0$, $t = 0$.
3: while $t < H$ do
4:   Sample time-commitment $p \sim \mathrm{Cat}([P_{min}, P_{max}])$
5:   Sample skill $z_t \sim \pi_{\theta_h}(\cdot|s_t)$
6:   for $t' = t, \ldots, t + p - 1$ do
7:     Sample action $a_{t'} \sim \pi_{\theta_l}(\cdot|f(s_{t'}), z_t)$
8:     Observe new state $s_{t'+1}$ and reward $r_{t'}$
9:   end for
10:  $t \leftarrow t + p$
11: end while
12: Output: the collected rollout $(s_0, z_0, a_0, r_0, \ldots, s_H)$

Algorithm 2 HiPPO
1: Input: skills $\pi_{\theta_l}(a|o, z)$, manager $\pi_{\theta_h}(z|s)$, horizon $H$, learning rate $\alpha$
2: while not done do
3:   for actor = 1, 2, ..., N do
4:     Obtain trajectory with Collect Rollout
5:     Estimate advantages $\hat{A}(a_t, o_t, z_t)$ and $\hat{A}(o_t, z_t)$
6:   end for
7:   Update the policy parameters by ascending the surrogate objective of Eq. 6
8: end while

To answer the posed questions, we evaluate our new algorithms on a variety of robotic navigation tasks. Each task uses a different robot to solve the Gather environment, depicted in FIG2, in which the agent must collect apples (green balls, +1 reward) while avoiding bombs (red balls, -1 reward). This is a challenging hierarchical task with sparse rewards that requires agents to simultaneously learn perception, locomotion, and higher-level planning capabilities. We use two different types of robots within this environment. Snake is a 5-link robot with a 17-dimensional observation space and a 4-dimensional action space; Ant is a quadrupedal robot with a 27-dimensional observation space and an 8-dimensional action space. Both can move and rotate in all directions, and Ant faces the added challenge of avoiding falling over irrecoverably.

| Gather | Algorithm | Initial | Mass | Dampening | Inertia | Friction |
|---|---|---|---|---|---|---|
| Snake | Flat PPO | 2.72 | 3.16 (+16%) | 2.75 (+1%) | 2.11 (-22%) | 2.75 (+1%) |
| Snake | HiPPO, p = 10 | 4.38 | 3.28 (-25%) | 3.27 (-25%) | 3.03 (-31%) | 3.27 (-25%) |
| Snake | HiPPO random p | 5.11 | 4.09 (-20%) | 4.03 (-21%) | 3.21 (-37%) | 4.03 (-21%) |
| Ant | Flat PPO | 2.25 | 2.53 (+12%) | 2.13 (-5%) | 2.36 (+5%) | 1.96 (-13%) |
| Ant | HiPPO, p = 10 | 3.84 | 3.31 (-14%) | 3.37 (-12%) | 2.88 (-25%) | 3.07 (-20%) |
| Ant | HiPPO random p | 3.22 | 3.37 (+5%) | 2.57 (-20%) | 3.36 (+4%) | 2.84 (-12%) |

Table 1. Zero-shot transfer performance of flat PPO, HiPPO, and HiPPO with randomized period. The performance in the initial environment is shown, along with the average performance over 25 rollouts in each modified environment.

We try several different modifications to the base Snake Gather and Ant Gather environments. One at a time, we change the body mass, dampening of the joints, body inertia, and friction characteristics of both robots. The results, presented in Table 1, show that HiPPO with randomized period Categorical$([T_{min}, T_{max}])$ not only learns faster initially on the original task, but is also better able to handle these changes in dynamics. In terms of the percent change in policy performance between the training and test environments, it outperforms HiPPO with fixed period on 6 out of 8 related tasks without taking any gradient steps. Our hypothesis is that the randomized period teaches the policy to adapt to a wide variety of scenarios, while its information bottleneck keeps its representations for planning and locomotion separate, so changes in dynamics cannot simultaneously affect both.
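Returning to Algorithm 1 above, here is a runnable Python sketch of the rollout collection under the same gym-style assumptions as before; `manager`, `skill`, and the proprioceptive index range of `f` are hypothetical placeholders.

```python
import numpy as np

def f(state):
    """Bottleneck o = f(s): keep only proprioceptive entries (illustrative slice)."""
    return state[:17]

def collect_rollout(env, manager, skill, p_min, p_max, horizon, rng):
    """Python rendering of Algorithm 1 (Collect Rollout)."""
    s = env.reset()
    rollout, t = [], 0
    while t < horizon:
        # Random time-commitment p ~ Cat([P_min, P_max]) (uniform over integers).
        p = rng.integers(p_min, p_max + 1)
        z = manager.sample(s)                  # z_t ~ pi_theta_h(.|s_t)
        for _ in range(p):
            a = skill.sample(f(s), z)          # a ~ pi_theta_l(.|f(s), z)
            s_next, r, done = env.step(a)
            rollout.append((s, z, a, r))
            s = s_next
            t += 1
            if done or t >= horizon:
                return rollout
    return rollout
```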
[Table 2: Empirical evaluation of Lemma 1, reporting, per Gather environment and algorithm, the cosine similarity between the approximate and exact gradients, and $\max_{z \neq z_{kp}} \pi_{\theta_l}(a_t|o_t, z)$.]

In the right column of Table 2 we evaluate the quality of our assumption by computing the average largest probability of the observed action under the other skills. In the left column we report the cosine similarity between our approximate gradient and the exact gradient computed from Eq. 2 without the approximation. In Lemma 1, we assumed that the sub-policies are diverse. This allowed us to derive a more efficient and numerically stable gradient. In this section, we empirically test the validity of our assumption, as well as the quality of our approximation. To this end we run, on Snake Gather and Ant Gather, the HiPPO algorithm both from scratch and on the pre-trained skills described in the previous section. In Table 2, we report the average maximum probability under other sub-policies, corresponding to the $\epsilon$ from our assumption. We observe that in all settings this is on the order of 0.1. Therefore, under the $p = 10$ that we use in our experiments, the terms we neglect carry a factor of $\epsilon^{p-1} = 10^{-9}$. It is not surprising, then, that the average cosine similarity between the full gradient and the approximated one is almost 1, as also reported in Table 2. We only ran two random seeds for these experiments, as the results seemed consistent and these runs are computationally expensive.

For all experiments, both PPO and HiPPO used learning rate $3 \times 10^{-3}$, clipping parameter $\epsilon = 0.1$, 10 gradient updates per iteration, a batch size of 100,000, and discount $\gamma = 0.999$. HiPPO used $n = 6$ sub-policies. Ant Gather has a horizon of 5000, while Snake Gather has a horizon of 8000 due to its larger size. All runs used three random seeds. HiPPO uses a manager network with 2 hidden layers of 32 units, and a skill network with 2 hidden layers of 64 units. In order to have roughly the same number of parameters for each algorithm, flat PPO uses a network with 2 hidden layers of 256 and 64 units, respectively. For HiPPO with randomized period, we resample $p \sim \mathrm{Uniform}\{5, \ldots, 15\}$ every time the manager network outputs a latent, and we provide the number of timesteps until the next latent selection as an input to both the manager and the skill networks. The single baseline and the skill-dependent baselines used an MLP with 2 hidden layers of 32 units to fit the value function. The skill-dependent baseline receives, in addition to the full observation, the active latent code and the time remaining until the next skill sampling.
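A sketch of networks with the hidden sizes quoted above, in PyTorch; the input and output dimensionalities here are hypothetical (they depend on the robot), and the tanh activations are an assumption.

```python
import torch.nn as nn

def mlp(in_dim, hidden, out_dim):
    """Build a simple fully connected network with tanh activations."""
    layers, d = [], in_dim
    for h in hidden:
        layers += [nn.Linear(d, h), nn.Tanh()]
        d = h
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

n_skills = 6
# Manager: 2 hidden layers of 32 units, outputs logits over the n skills.
manager = mlp(in_dim=27, hidden=[32, 32], out_dim=n_skills)
# Skill: 2 hidden layers of 64 units, sees masked obs plus one-hot latent.
skill = mlp(in_dim=17 + n_skills, hidden=[64, 64], out_dim=8)
```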
Lemma 1. If the skills are sufficiently differentiated, then the latent variable can be treated as part of the observation to compute the gradient of the trajectory probability. Concretely, if $\pi_{\theta_h}(z|s)$ and $\pi_{\theta_l}(a|s, z)$ are Lipschitz in their parameters, and $0 < \pi_{\theta_l}(a_t|s_t, z_j) < \epsilon$ for all $z_j \neq z_{kp}$, then the approximation of Eq. 3 holds.

Proof. From the point of view of the MDP, a trajectory is a sequence $\tau = (s_0, a_0, s_1, a_1, \ldots, a_{H-1}, s_H)$. Assume we use the hierarchical policy introduced above, with a higher-level policy modeled as a parameterized discrete distribution with $n$ possible outcomes, $\pi_{\theta_h}(z|s) = \mathrm{Categorical}_{\theta_h}(n)$. We can expand $P(\tau)$ into the product of policy and environment dynamics terms, with $z_j$ denoting the $j$-th possible value out of the $n$ choices, as in Eq. 2. Taking the gradient of $\log P(\tau)$ with respect to the policy parameters $\theta = [\theta_h, \theta_l]$, the dynamics terms disappear, leaving:

$$\nabla_\theta \log P(\tau) = \sum_{k=0}^{H/p - 1} \nabla_\theta \log \Big[\sum_{j=1}^{n} \pi_{\theta_h}(z_j|s_{kp}) \prod_{t=kp}^{(k+1)p-1} \pi_{\theta_l}(a_t|s_t, z_j)\Big]$$

The sum over possible values of $z$ prevents the logarithm from splitting the product over the $p$-step sub-trajectories. This term is problematic, as the product quickly approaches 0 as $p$ increases and suffers from considerable numerical instabilities. Instead, we want to approximate this sum of products by a single one of its terms, which can then be decomposed into a sum of logs. For this we study each of the terms in the sum: the gradient of a sub-trajectory probability under a specific latent, $\nabla_\theta \big[\pi_{\theta_h}(z_j|s_{kp}) \prod_{t=kp}^{(k+1)p-1} \pi_{\theta_l}(a_t|s_t, z_j)\big]$. Now we can use the assumption that the skills are easy to distinguish, $0 < \pi_{\theta_l}(a_t|s_t, z_j) < \epsilon$ for all $z_j \neq z_{kp}$. Therefore, the probability of the sub-trajectory under a latent different from the one that was originally sampled, $z_j \neq z_{kp}$, is upper bounded by $\epsilon^p$. Taking the gradient, applying the product rule, and using the Lipschitz continuity of the policies, we obtain that every term with $z_j \neq z_{kp}$ is of order $\epsilon^{p-1}$. Thus, we can replace the summation over latents by the single term corresponding to the latent that was actually sampled at that time:

$$\nabla_\theta \log P(\tau) \approx \sum_{k=0}^{H/p - 1} \Big[\nabla_\theta \log \pi_{\theta_h}(z_{kp}|s_{kp}) + \sum_{t=kp}^{(k+1)p-1} \nabla_\theta \log \pi_{\theta_l}(a_t|s_t, z_{kp})\Big]$$

Interestingly, this is exactly $\nabla_\theta \log P(s_0, z_0, a_0, s_1, \ldots)$. In other words, it is the gradient of the log-probability of the trajectory in which the latent variables $z$ are included as if they were observed.

Lemma 2. For any functions $b_h: \mathcal{S} \to \mathbb{R}$ and $b_l: \mathcal{S} \times \mathcal{Z} \to \mathbb{R}$, the identities of Eq. 4 hold. Proof. For the first equality, we apply the law of iterated expectations, write out the definition of the expectation, and undo the gradient-log trick:

$$\mathbb{E}_\tau\big[\nabla_\theta \log \pi_{\theta_h}(z_{kp}|s_{kp})\, b_h(s_{kp})\big] = \mathbb{E}_{s_{kp}}\Big[b_h(s_{kp}) \sum_{z} \pi_{\theta_h}(z|s_{kp})\, \nabla_\theta \log \pi_{\theta_h}(z|s_{kp})\Big] = \mathbb{E}_{s_{kp}}\Big[b_h(s_{kp})\, \nabla_\theta \sum_{z} \pi_{\theta_h}(z|s_{kp})\Big] = 0$$

We follow the same strategy to prove the second equality: apply the same law of iterated expectations trick, express the expectation as an integral, and undo the gradient-log trick.
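To make the neglected-term estimate in the proof of Lemma 1 explicit, here is a sketch of the bound under the stated assumptions, with $L$ a generic Lipschitz constant; this is our own rendering of the product-rule argument, not the paper's exact derivation.

```latex
% For z_j != z_kp, the product rule and the assumptions pi_l(.|.,z_j) < eps,
% ||grad pi|| <= L, pi_h <= 1 bound the dropped term:
\begin{align*}
\Big\| \nabla_\theta \Big[ \pi_{\theta_h}(z_j|s_{kp}) \prod_{t=kp}^{(k+1)p-1} \pi_{\theta_l}(a_t|s_t, z_j) \Big] \Big\|
 &\le \big\|\nabla_\theta \pi_{\theta_h}(z_j|s_{kp})\big\| \prod_{t} \pi_{\theta_l}(a_t|s_t, z_j) \\
 &\quad + \pi_{\theta_h}(z_j|s_{kp}) \sum_{t} \big\|\nabla_\theta \pi_{\theta_l}(a_t|s_t, z_j)\big\|
          \prod_{t' \neq t} \pi_{\theta_l}(a_{t'}|s_{t'}, z_j) \\
 &\le L\,\epsilon^{p} + p\,L\,\epsilon^{p-1} = O\!\big(\epsilon^{p-1}\big).
\end{align*}
```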
We propose HiPPO, a stable Hierarchical Reinforcement Learning algorithm that can train several levels of the hierarchy simultaneously, giving good performance both in skill discovery and adaptation.
Learning can be framed as trying to encode the mutual information between input and output while discarding other information in the input. Since the joint distribution of input and output is unknown, so is the true mutual information. To quantify how difficult a task is to learn, we calculate an observed mutual information score by dividing the estimated mutual information by the entropy of the input. We substantiate this score analytically by showing that the estimated mutual information has an error that increases with the entropy of the data. Intriguingly, depending on how the data is represented, the observed entropy and mutual information can vary wildly. There needs to be a match between how data is represented and how a model encodes it. Experimentally, we analyze image-based input data representations and demonstrate that performance outcomes of extensive network architecture searches are well aligned with the calculated score. Therefore, to ensure better learning outcomes, representations may need to be tailored to both task and model to align with the implicit distribution of the model.

Sometimes perspective is everything. While the information content of encoded data may not change when the way it is represented changes, its usefulness can vary dramatically (see Fig. 1). A "useful" representation, then, is one that makes it easy to extract information of interest. This in turn very much depends on who, or which algorithm, is extracting the information. Evidently, the way data is encoded and the way a model "decodes" the information need to match. Historically, people have invented a large variety of data representations to convey information. An instance of this theme is the heliocentric vs. geocentric view of the solar system. Before the heliocentric viewpoint was widely accepted, scholars had already worked out the movements of the planets. The main contribution of the new perspective was that the planetary trajectories became simple ellipses instead of more complicated movements involving loops. In a machine learning context, many have experimented with finding good data representations for specific tasks, such as speech recognition, different color spaces for face recognition, and increased robustness in face detection, among many others. Yet no clear understanding has emerged of why a given representation is more suited to one task but less to another. We cast the problem of choosing the data representation for learning as one of determining the ease of encoding the relationship between input and output, which depends both on how the data is represented and on which model is supposed to encode it.

Contribution: In this work, we argue that learning a task is about encoding the relationship between input and output. Each model implicitly has a way of encoding information, where some variations in the data are easier to encode than others. Armed with this insight, we empirically evaluate different data representations and record what impact data representations have on learning outcomes and on the types of networks found by automated network optimization.

[Figure 1: These images contain different representations of the same information. However, one of the two is much easier for us to understand. We posit that, for our nervous system, one of the two images has a higher observed mutual information for the task of recognizing the person.]

Most interestingly, we are able
to show that relative learning outcomes can be predicted by an empirical mutual information score, which we coin the Observed Mutual Information (OMI) score. This work aims to bring us a bit closer to understanding what makes a given learning task easier or harder. While there appears to be little work on this exact question, fresh-eyed excursions into what makes an optimization easier or harder have been ventured before.

Data representations: Data representations have been optimized for a long time. In fact, there is a rich theory of linear invertible representations for both finite- and infinite-dimensional spaces called Frame Theory. Specific popular examples of frames are Wavelets and Curvelets. Empirically tested only on ImageNet, Uber research showed that using a data representation closer to how JPEG encodes information may help to create a faster residual network architecture with slightly better performance. In a similar spirit, in a robotics context, Grassmann and Kahrs evaluated learning performance on approximating robot dynamics using various common robot dynamics data representations such as Euler angles. What is more common in deep learning is to adapt the network architecture to the task at hand. An intriguing recent example taking this idea a step further are Weight Agnostic Neural Networks, which have been designed to already "function" on a task even when starting from randomly initialized weights.

Measuring learning difficulty: There appears to be little newer literature on this question, yet the question of how to measure how easy or difficult a learning task is was already posed in the nineties. Similar to our own findings, that work related the difficulty to information-theoretic measures called the information gain (mutual information) and the information gain ratio (very similar to our proposed OMI value), introduced in the context of decision trees by Quinlan (1986; 2014). Sadly, this interesting line of inquiry does not appear to have received much attention since. Others take a different road by comparing several possible scores for assessing the difficulty of classification learning problems, such as linear separability and feature efficiency. More commonly, instead of judging task difficulty, there is a vast literature on feature selection, e.g. judging how suitable a feature is for a given learning problem. Desirable features are reliably selected for a learning task (Meinshausen & Bühlmann, 2010) and ideally are highly predictive of the output variable. Relating the overall difficulty of selecting good features to how difficult a learning task is has not been established, to our understanding.

The objective of learning can be phrased as finding a function that minimizes the uncertainty of the output given the input while discarding as much task-irrelevant information as possible. In information-theoretic language, this viewpoint was introduced and later extended in the form of the objective of the Information Bottleneck (IB) Lagrangian. Given an input x, a model encoding z, an output y, and mutual information I(·; ·), the IB Lagrangian aims to minimize the following cost function:

$$L(p(z|x)) = I(x; z) - \beta I(y; z)$$

The model is supposed to find an encoding z of data x that maximizes the mutual information with output y while also minimizing the mutual information with x. It becomes apparent that the "ideal" learning task is one where all of the information in x is highly relevant to output y.
In this case minimizing I(z; x) becomes unnecessary and the learning algorithm can place all of its focus on simply associating x with y. A highly difficult task would be one where I(x; y) is very small in absolute terms, or where estimating I(x; y) is difficult for the chosen model. An attractive option for evaluating how difficult a learning task is, then, is to measure its mutual information I(x; y). Empirically estimating the mutual information, however, incurs errors whose bias bound grows with the entropy H(x). To adjust for this, we divide by the entropy term, leading to a mutual information score that takes the uncertainty of the estimate into account. In the following we bound the deviation of the true mutual information from the estimated mutual information. This estimated mutual information we coin Observed Mutual Information, since it is the information, dependent on representation and model, that we may ultimately be able to extract from the data.

We begin by stating a result bounding the error in entropy estimation.

Lemma 1. For a discrete random variable $x \in \mathcal{X}$, with the plug-in estimate $\hat{H}(\cdot)$ of its entropy based on an i.i.d. sample of size $m$, we have that

$$0 \le H(x) - \mathbb{E}\big[\hat{H}(x)\big] \le \log\Big(1 + \frac{|\mathcal{X}| - 1}{m}\Big) \quad (\text{Ineq. } 1)$$

Using the above bound we are able to bound the error in mutual information estimation in a particular regime which is relevant for several applications of interest, such as object detection.

Definition 1 (Distillation regime). In the distillation regime we assume that: (i) the samples x have very high entropy H(x); (ii) the entropy of y is small with respect to the entropy of x; (iii) the number of samples m of x is small compared to H(x).

Example 1. Typical object detection tasks are in the distillation regime. The entropy of images is high (property i), while labels are compactly represented (property ii). Furthermore, the number of image samples is small compared to all possible samples (property iii).

Lemma 2 (Mutual information bias). For discrete random variables $x \in \mathcal{X}$, $y \in \mathcal{Y}$, with plug-in estimates $\hat{H}(\cdot)$ of their entropies based on an i.i.d. sample of size $m$, the deviation of the estimated mutual information from the true mutual information in the distillation regime is bounded by a term that grows with the entropy of x.

Proof. We start with a left-hand side similar to Ineq. 1 and expand the mutual information into entropy terms. Using the stated assumptions and applying Ineq. 1, together with the asymptotic equipartition property (AEP) $|\mathcal{X}| \le 2^{H(x)+\epsilon}$ (Cover & Thomas, 2012), we arrive at a bound of order $\log(1 + 2^{H(x)+\epsilon}/m)$, which is dominated by H(x) when $2^{H(x)} \gg m$.

A similar relationship can be extracted from the information bottleneck bound. Theorem 3 (Theorem 4,): for any probability distribution p(x, y), with probability at least 1 − δ over the draw of a sample of size m from p(x, y), a generalization bound holds for all bottleneck variables z. Again we consider the case where |Y| is small, |X| is large, and where the learned bottleneck variable z captures the output variable y exactly (hence |Z| is small); then $I(x; z) = H(x) - \epsilon$. For large H(x), the bound becomes dominated by $\hat{H}(x)$ (for fixed |Y|, m, δ).

When it comes to mutual information, we care about a high mutual information between input and output, but also about a reliable estimate of this mutual information. From the above calculations, we distill that the estimated entropy $\hat{H}(x)$ has a decisive effect on the uncertainty of the achievable mutual information between bottleneck variable z and output y. As a metaphor, imagine the task of predicting the birthrate in Mongolia from the movements of the stock market for a given month.
Most certainly one will be able to correlate the two. This is a common occurrence called spurious correlation, making us fools of randomness. Hence we arrive at a score for mutual information that captures both the magnitude of the estimated mutual information and an estimate of the uncertainty of this estimate.

Definition 2 (OMI). Given random variables x and y, empirical mutual information $\hat{I}(x; y)$, and empirical entropy $\hat{H}(x)$, the observed mutual information score (OMI) is defined as

$$\mathrm{OMI}(x; y) = \frac{\hat{I}(x; y)}{\hat{H}(x)}$$

A similar thought process, introducing essentially the same mutual information score, has long been used for selecting features in decision trees. While we base our score on a whole dataset of (x, y)-pairs, the information gain ratio (IGR) for decision trees is a feature-level criterion.

Having defined our focus on mutual information and its associated OMI values, we turn our attention to the effect data representations may have on the learning process. We begin by defining what we mean by a data representation. Definition 3. A data representation r ∈ R is the output of a left-invertible mapping m(·): X → R applied to the "original" data x ∈ X. Therefore, all data representations are in a sense "equivalent": since the mappings are invertible, they share the same information content, I(x; y) = I(m(x); y). Yet this is only half true. Clearly, how data is represented does make a difference. As a worst case, an encrypted version of a dataset is unlikely to be useful for a learning task. In a perfect setting, a dataset could be represented in a way that directly maps to the desired output while keeping around the additional "bits" needed to reconstruct the input. In such a case, learning would only require disregarding the extra dimensions.

To understand what impact a data representation may have, we employ the idea of expected coding length E[l(x)] and focus on what happens when we choose the "wrong code". From information theory, we learn that the most efficient encoding we can possibly find is lower bounded by the entropy of the distribution we are trying to compress. Here we assume that we have a candidate distribution q(x) that we are trying to fit to the true distribution p(x). The expected coding length under the candidate distribution can then never be smaller than the entropy of the true distribution:

$$H(p) + D(p\,\|\,q) \le \mathbb{E}_p[l(x)] < H(p) + D(p\,\|\,q) + 1$$

In the following we assume that any function family F has an associated candidate distribution q(·) through which it measures how uncertain a variable is; e.g. linear regression uses a normal distribution and assesses uncertainty via the determinant of the covariance matrix. The difficulty with assuming a candidate distribution is that the data may not follow the same distribution. Given such a mismatch, the model will overestimate the entropy of the distribution, as the wrong-code bound above shows.

Theorem 5 (Representation-Model-Alignment). Assuming a candidate distribution q(·) and representations $r_1$, $r_2$ with $D(p(r_1)\|q(r_1)) > D(p(r_2)\|q(r_2)) + 1$, we have that

$$\mathbb{E}[l(r_1)] > \mathbb{E}[l(r_2)]$$

Proof. From the wrong-code bound above we know that $\mathbb{E}[l(r_1)] \ge H(r_1) + D(p(r_1)\|q(r_1)) > H(r_2) + D(p(r_2)\|q(r_2)) + 1 > \mathbb{E}[l(r_2)]$, where we use that invertible mappings leave the entropy of a discrete variable unchanged, so $H(r_1) = H(r_2)$.

A consequence of the above theorem is that the representation of the data distribution influences the observed entropy by changing the alignment of the true distribution with the assumed candidate distribution of the model. Critically for real-world situations, the wrong-code theorem invalidates the assumption that the estimated mutual information does not change when an invertible transformation is applied.
The true mutual information does indeed not change, yet this would be of little comfort if one were to encrypt the data and try to learn from it. Changing (invertible) data representations can better align the true task distribution and the assumed model distribution in the sense of minimizing the relative entropy between candidate q(·) and true distribution p(·). This in turn has a direct influence on the observed mutual information and the OMI score. The closer a representation r aligns the true distribution with the model's candidate distribution, the smaller the data entropies $\hat{H}(r)$ and $\hat{H}(y|r)$ will be. In this sense there are thus better and worse data representations for a given model and learning task.

The theoretical findings presented in the previous sections have to be verified in the real world. We therefore conducted a wide-ranging empirical study of their relevance over a number of different datasets and network architectures. In total we evaluated over 8000 networks of varying sizes. We provide the full code and data required to replicate our experiments at the following URL: https://drive.google.com/open?id=1D8wICzJVPJRUWB9y5WgceslXZfurY34g

To have a chance that our findings are applicable beyond the scope of this publication, we chose a diverse set of four datasets that capture different vision tasks: two datasets for classification (KDEF and Groceries) and two for regression (Lane Following and Drone Racing). Sample images for each of the datasets can be found in the appendix.

Lane Following (LF): We generated this dataset using the Duckietown Gym, a simulation environment used for the AI Driving Olympics placed in Duckietown. It is the only simulated dataset we used in our experiments. Data is generated using a pure-pursuit lane-following algorithm that returns the desired angular velocity; longitudinal velocity is assumed to be constant. The learned model has to predict the angular velocity returned by the expert based on the image of the lane. Both train and test sets include domain randomization. To reduce the cost of training we downsampled each of the images to a third of their original size.

KDEF: This dataset is based on an emotion recognition dataset. Each of the images shows male and female actors expressing one of seven emotions. Images are captured from a number of different fixed viewpoints and centered on the face. To add more diversity to the data we added small color and brightness perturbations and a random crop. Moreover, since the dataset provides few samples, we downsampled each of the images to a sixth of their original size.

Drone Racing: This dataset is based on the Drone Racing dataset. We use the mDAVIS data from subsets 3, 5, 6, 9, and 10. While the original dataset provides the full pose (all six DOF), we train our feedforward networks to recover only the rotational DOF (roll, pitch, and yaw) from grayscale images. We matched the IMU data, which is sampled at 1000 Hz, to the timestamp of each grayscale image captured at 50 Hz using linear interpolation. Since the images do not have multiple color channels, we did not investigate a separate YCbCr or PREC representation for this dataset.

Groceries: We use the Freiburg Groceries Dataset and their original "test0/train0" split. Our only modifications are that we reserve a random subset of the test data for the evaluation of our hyperparameter optimization and that we reduce the size of the images from 256 × 256 to 120 × 120 pixels. Each of the images has to be classified into one of 25 categories.
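Ahead of the estimation details in the next paragraph, here is a minimal sketch of how the OMI score of Definition 2 can be computed under the multivariate-Gaussian entropy model described there; the entropy floor of 2 follows the text, while the array shapes and the small jitter are our own assumptions.

```python
import numpy as np

def gaussian_entropy(X, floor=2.0):
    """Entropy of samples X (m, d) under a fitted Gaussian, via SVD.

    Assumes m > d so the covariance has full rank. Uses the log singular
    values of the centered data: eig(Sigma) = s**2 / (m - 1).
    """
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)
    log_eigs = 2 * np.log(s + 1e-12) - np.log(len(X) - 1)
    h = 0.5 * (X.shape[1] * np.log(2 * np.pi * np.e) + log_eigs.sum())
    return max(h, floor)  # lower-bound to limit severe underestimation

def omi_score(X, Y):
    """OMI = I_hat(x; y) / H_hat(x), with I_hat = H(x) + H(y) - H(x, y).

    X: (m, d_x) flattened inputs; Y: (m, d_y) targets (reshape 1-D targets
    to a column first).
    """
    h_x = gaussian_entropy(X)
    h_y = gaussian_entropy(Y)
    h_xy = gaussian_entropy(np.hstack([X, Y]))
    return (h_x + h_y - h_xy) / h_x
```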
To estimate the OMI values, we calculated the individual entropies $\hat{H}(x)$, $\hat{H}(y)$, and $\hat{H}(x, y)$. To compute $\hat{H}(x)$ and $\hat{H}(x, y)$, we assume a multivariate Gaussian distribution on the variables. The entropy is then computed as $\hat{H}(x) = \frac{1}{2}\log(\det(2\pi e \Sigma))$, where $\Sigma$ denotes the covariance matrix of x. To calculate this we apply an SVD decomposition and use the sum of log singular values to estimate the entropy. To avoid severe underestimation (entropy estimation is biased to underestimate), entropy values are lower bounded by 2. Entropy values for y are calculated the same way for the regression tasks. For the classification tasks, we assume a multinoulli distribution and estimate the entropy of y accordingly.

Manually tuning the hyperparameters of neural networks is both time consuming and a possible source of unwanted bias in experiments. There is a wide range of automated methods available to mitigate these flaws. We utilize Bayesian optimization to find the set of optimal hyperparameters for each network. Since the initial weights of our networks are sampled from a uniform random distribution, we can expect the performance to fluctuate between runs. Due to its probabilistic approach, Bayesian optimization can account for this uncertainty. Moreover, it can optimize categorical and continuous dimensions concurrently. The dimensions and their constraints were chosen to be identical for each representation but were adapted to each dataset. For an in-depth introduction to Bayesian optimization we refer to the literature. More information on our particular implementation of Bayesian optimization can be found in the appendix.

We investigated three basic architectures. Our optimization was constrained to the same domain for all representations of a dataset for each of the network architectures. The full list of constraints for each network and dataset can be found in the code accompanying this paper. The initial learning rate was optimized for all architectures. Convolutional Neural Networks: We use a variable number of convolutional layers, with or without max-pooling layers between them, followed by a variable number of fully connected layers. Moreover, kernel sizes and numbers of filters are also parametrized. Dense Neural Networks: These networks consist of blocks of variably sized fully connected layers. We optimize the activation function after each layer as a categorical variable. ResNets: The Residual Neural Networks (ResNets) are made up of a variable number of residual layers. Each of the layers contains a variable number of convolutions, which are themselves fully parametrized.

While one could select an arbitrary number of representations for images, we limit ourselves to five which have previously been used in image processing. Our focus is not on finding the best representations but on showing how sensitive learning processes are to the representation of the input data. Sample images for each of the representations can be found in the appendix. RGB: RGB is likely the representation we use most in our everyday life. Almost all modern displays and cameras capture or display data as an overlay of red, green, and blue color channels. For simplicity we refer to the grayscale images of the Drone Racing dataset as being "RGB". YCbCr: The YCbCr representation is used in a number of different image storage formats such as JPEG and MPEG (David S.). It represents the image as a combination of a luminance channel and two chrominance channels. It is useful for compression because the human eye is much less sensitive to changes in chrominance than to changes in luminance.
PREC: This representation partially decorrelates the color channels of the image, based on previous work on preconditioning convolutional neural networks. For an image $x \in \mathbb{R}^{m \times n \times c}$ with c channels, we first calculate the expected value of the covariance between the channels over the dataset, $\Sigma = \mathbb{E}_{x \in D}[\mathrm{cov}(x_{ij})] \in \mathbb{R}^{c \times c}$, where $x_{ij} \in \mathbb{R}^c$ is the channel vector at pixel (i, j) of image x ∈ D. We then solve the eigenvalue problem for $\Sigma$, obtaining real eigenvalues $\Lambda$ and the matrix V containing the eigenvectors. A small $\epsilon$ is added for numerical stability. We get $U = \mathrm{diag}(\Lambda + \epsilon I)^{-1/2} V$, which is stored in memory and consecutively applied to each image in the dataset, yielding $x_{prec} = x_{rgb} \cdot U$.

DCT: The 2D type-II discrete cosine transform (DCT) is a frequency-based representation. Low-frequency coefficients are located in the top left corner of the representation, and horizontal/vertical frequencies increase towards the right and down, respectively. This representation applies the DCT to each of the channels separately. DCT has been used extensively for face detection, and all its coefficients bar one are invariant to uniform changes in brightness.

Block DCT: Unlike the DCT representation, we apply the discrete cosine transform not to the whole image but to 8×8 non-overlapping patches of each of the channels. This exact type of DCT is widely used in JPEG compression, by quantizing the coefficients of the DCT and applying Huffman encoding (David S.).

Training: Each network was trained using an Adam optimizer. Training was terminated when there were more than 7 consecutive epochs without a decrease in loss on the validation set, or after 30 epochs.

After evaluating a total of 5753 networks, 3702 of which finished training, we have verified our intuition that representations are important. We see a fairly consistent pattern across all datasets of RGB and YCbCr being the best representations, followed by PREC and blockwise DCT, while DCT falls short (see Fig. 3). Moreover, we observe the great importance of hyperparameters in the spread of results for each network architecture. Had we chosen to hand-tune our parameters and accidentally picked a very poorly performing network for the RGB representation, we could easily have come to the conclusion that DCT achieves better results on all datasets. As we predicted, the performance of a representation also depends greatly on the network architecture. This is especially visible for the lane following dataset: we can see that, surprisingly, dense networks are among the best for both RGB and YCbCr, while they fall far behind on all other representations. The OMI scores we proposed show strong correlation with the results we obtained from architecture search. They can even predict the comparatively small differences between the other representations with reasonable accuracy (see Fig. 2). It is unclear why the prediction fails for some of the datasets, especially the linear score on the KDEF dataset. Overall, we observe the significant correlated effect representation has on estimated entropy, OMI scores, and performance.

This work started by trying to pave a way to the exciting task of determining the difficulty of a learning task. To achieve this we introduced OMI, a score that takes both the estimated mutual information and the possible variance of this estimate into account. If the score proves itself in future works, it may serve as a useful tool for automatic representation or architecture search.
As outlined, this will depend to a great deal on how well we understand the candidate distributions of the network architectures and representations that are currently in use. Similarly, it will be beneficial to study further techniques for removing the bias in the estimation of mutual information and entropy. We have shown that in many problems of interest the naive computation of mutual information is biased and representation dependent; our OMI score partially removes this bias. The score also provides a criterion to evaluate different representations in a principled way.

To narrate Tab. 2, we note that estimating small entropy values is very error prone. When assuming a normal distribution, the entropy is calculated via the sum of the logarithms of the eigenvalues of the covariance matrix of the data. The conditioning of the logarithm, however, gets worse the closer its argument is to zero. Eigenvalues close enough to zero are thus likely to carry a significant error when used for entropy computation. This is all to say that while purely mathematically it is impossible to have negative mutual information values, numerically such things are bound to happen when dealing with small eigenvalues, as is prominent with the DCT representation.

While there have been some theoretical proposals that would allow Bayesian optimization to be run in parallel asynchronously, we restrict ourselves to a simple form of batch parallelization, evaluating n = 6 points in parallel. We acquire the points using the minimum constant liar strategy. The base estimator is first used after 10 points have been evaluated. Our acquisition function is chosen at each iteration from a portfolio of acquisition functions using a GP-Hedge strategy. We optimize the acquisition function by sampling it at n points for categorical dimensions and with 20 iterations of L-BFGS for continuous dimensions. Since the optimizer has no information about the geometric properties of the network or whether the network can fit in the system's memory, some of the generated networks cannot be trained. Two common modes of failure were too many pooling layers (resulting in a layer size smaller than the kernel of subsequent layers) and running out of memory, which was especially prevalent for dense networks. In our experiments we observed that roughly 35% of all networks did not complete training. To stop the Bayesian optimizer from evaluating these points again, we reported a large artificially generated loss to the optimizer at the point where the network crashed. The magnitude of this loss was chosen manually for each dataset to be roughly one order of magnitude larger than the expected loss. The influence of this practice will have to be investigated in future research.
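A sketch of the batch-parallel Bayesian optimization loop described above, using scikit-optimize, which provides a GP surrogate, the gp_hedge acquisition portfolio, and the constant-liar batch strategy; the search dimensions and `train_and_evaluate` are illustrative placeholders, not the paper's exact search space.

```python
from skopt import Optimizer
from skopt.space import Real, Integer, Categorical

# Hypothetical search space: learning rate, depth, activation.
space = [Real(1e-5, 1e-2, prior="log-uniform", name="lr"),
         Integer(1, 5, name="n_layers"),
         Categorical(["relu", "tanh"], name="activation")]

# GP surrogate kicks in after 10 evaluated points; gp_hedge picks the
# acquisition function from a portfolio at each iteration.
opt = Optimizer(space, base_estimator="GP",
                n_initial_points=10, acq_func="gp_hedge")

for _ in range(20):
    # Propose a batch of 6 points with the constant-liar (minimum) strategy.
    candidates = opt.ask(n_points=6, strategy="cl_min")
    # train_and_evaluate is a user-supplied function returning the loss;
    # crashed networks would report a large artificial loss, as in the text.
    losses = [train_and_evaluate(c) for c in candidates]
    opt.tell(candidates, losses)
```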
We take a step towards measuring learning task difficulty and demonstrate that, in practice, performance strongly depends on the match between the representation of the information and the model interpreting it.
Sepsis is a life-threatening complication from infection and a leading cause of mortality in hospitals. While early detection of sepsis improves patient outcomes, there is little consensus on exact treatment guidelines, and treating septic patients remains an open problem. In this work we present a new deep reinforcement learning method that we use to learn optimal personalized treatment policies for septic patients. We model patient continuous-valued physiological time series using multi-output Gaussian processes, a probabilistic model that easily handles missing values and irregularly spaced observation times while maintaining estimates of uncertainty. The Gaussian process is directly tied to a deep recurrent Q-network that learns clinically interpretable treatment policies, and both models are learned together end-to-end. We evaluate our approach on a heterogeneous dataset of septic patients spanning 15 months from our university health system, and find that our learned policy could reduce patient mortality by as much as 8.2% from an overall baseline mortality rate of 13.3%. Our algorithm could be used to make treatment recommendations to physicians as part of a decision support tool, and the framework readily applies to other reinforcement learning problems that rely on sparsely sampled and frequently missing multivariate time series data.

Sepsis is a poorly understood complication arising from infection, and it is both a leading cause of patient mortality BID8 ) and of associated healthcare costs BID37 ). Early detection is imperative, as earlier treatment is associated with better outcomes BID33, BID21 ). However, even among patients with recognized sepsis, there is no standard consensus on the best treatment. There is a pressing need for personalized treatment strategies tailored to the unique physiology of individual patients. Guidelines on sepsis treatment previously centered on early goal directed therapy (EGDT) and more recently have focused on sepsis care bundles, but none of these approaches are individualized.

Before the landmark publication on the use of early goal directed therapy BID31 ), there was no standard management for severe sepsis and septic shock. EGDT consists of early identification of high-risk patients, appropriate cultures, infection source control, antibiotics administration, and hemodynamic optimization. The study compared a 6-hour protocol of EGDT promoting use of central venous catheterization to guide administration of fluids, vasopressors, inotropes, and packed red-blood-cell transfusions, and found that it significantly lowered mortality. Following the initial trial, EGDT became the cornerstone of the sepsis resuscitation bundle for the Surviving Sepsis Campaign (SSC) and the Centers for Medicare and Medicaid Services (CMS) BID6 ).

Despite the promising results of EGDT, concerns arose. External validity outside the single-center study was unclear, it required significant resources for implementation, and the elements needed to achieve pre-specified hemodynamic targets held potential risks. Between 2014 and 2017, a trio of trials reported an all-time low sepsis mortality, and questioned the continued need for all elements of EGDT for patients with severe sepsis and septic shock BID28, , BID27 ). The trial authors concluded EGDT did not improve patient survival compared to usual care but was associated with increased ICU admissions BID0 ). As a result,
As a result, they did not recommend it be included in the updated SSC guidelines BID30 ). Although the SSC guidelines provide an overarching framework for sepsis treatment, there is renewed interest in targeting treatment and disassembling the bundle BID22 ). A recent meta-analysis evaluated 12 randomized trials and 31 observational studies and found that time to first antibiotics explained 96-99% of the survival benefit BID17 ). Likewise, a study of 50,000 patients across the state of New York found a mortality benefit for early antibiotic administration, but not for intravenous fluids BID33 ). Beyond narrowing the bundle, there is emerging evidence that a patient's baseline risk plays an important role in response to treatment, as the survival benefit was significantly reduced for patients with more severe disease BID17 ). Taken together, the poor performance of EGDT compared to standard-of-care and the improved understanding of individual treatment effects call for re-envisioning sepsis treatment recommendations. Though the general consensus in critical care is that the individual elements of the sepsis bundle are typically useful, it is unclear exactly when each element should be administered and in what quantity. In this paper, we aim to directly address this problem using deep reinforcement learning. We develop a novel framework for applying deep reinforcement learning to clinical data, and use it to learn optimal treatments for sepsis. With the widespread adoption of Electronic Health Records, hospitals already automatically collect the relevant data required to learn such models. However, real-world operational healthcare data present many unique challenges and motivate the need for methodologies designed with their structure in mind. In particular, clinical time series are typically irregularly sampled and exhibit large degrees of missing values that are often informatively missing, necessitating careful modeling. The high degree of heterogeneity presents an additional difficulty, as patients with similar symptoms may respond very differently to treatments due to unmeasured sources of variation. Alignment of patient time series can also be a potential issue, as patients admitted to the hospital may have very different unknown clinical states and can develop sepsis at any time throughout their stay (with many already septic upon admission). Part of the novelty in our approach hinges on the use of a multi-output Gaussian process (MGP) as a preprocessing step that is jointly learned with the reinforcement learning model. We use an MGP to interpolate and to impute missing physiological time series values used by the downstream reinforcement learning algorithm, while importantly maintaining uncertainty about the clinical state. The MGP hyperparameters are learned end-to-end during training of the reinforcement learning model by optimizing an expectation of the standard Q-learning loss. Additionally, the MGP allows for estimation of uncertainty in the learned Q-values. For the model architecture we use a deep recurrent Q-network, in order to account for potential non-Markovian dynamics and to allow the model to retain memory of past states and actions. In our experiments utilizing EHR data from septic patients spanning 15 months from our university health system, we found that both the use of the MGP and the deep recurrent Q-network offered improved performance over simpler approaches. In this section we outline important background that motivates our improvements on prior work.
Reinforcement learning (RL) considers learning policies for agents interacting with unknown environments, and such problems are typically formulated as a Markov decision process (MDP) BID36 ). At each time t, an agent observes the state of the environment, s_t ∈ S, takes an action a_t ∈ A, and receives a reward r_t ∈ R, at which time the environment transitions to a new state s_{t+1}. The state space S and action space A may be continuous or discrete. The goal of an RL agent is to select actions in order to maximize its return, or expected discounted future reward, defined as $R_t = \sum_{t'=t}^{T} \gamma^{t'-t} r_{t'}$, where γ captures the tradeoff between immediate and future rewards. Q-learning BID40 ) is a model-free off-policy algorithm for estimating the expected return from executing an action in a given state. The optimal action value function is the maximum discounted expected reward obtained by executing action a in state s and acting optimally afterwards, defined as $Q^*(s, a) = \max_\pi \mathbb{E}[R_t \mid s_t = s, a_t = a, \pi]$, where π is a policy that maps states to actions. Given Q*, an optimal policy is to act by selecting $\arg\max_a Q^*(s, a)$. In Q-learning, the Bellman equation is used to iteratively update the current estimate of the optimal action value function according to $Q(s, a) \leftarrow Q(s, a) + \alpha\,(r + \gamma \max_{a'} Q(s', a') - Q(s, a))$, adjusting towards the observed reward plus the maximal Q-value at the next state s'. In deep Q-learning a deep neural network is used to approximate Q-values BID24 ), overcoming the issue that there may be infinitely many states if the state space is continuous. Denoting the parameters of the neural network by θ, Q-values Q(s, a|θ) are now estimated by performing a forward pass through the network. Updates to the parameters are obtained by minimizing a differentiable loss function, $L(\theta) = \mathbb{E}\big[(r + \gamma \max_{a'} Q(s', a'|\theta) - Q(s, a|\theta))^2\big]$, and training is usually accomplished with stochastic gradient descent. A fundamental limiting assumption of Markov decision processes is the Markov property, which is rarely satisfied in real-world problems. In medical applications such as our problem of learning optimal sepsis treatments, it is unlikely that a patient's full clinical state will be measured. A Partially Observable Markov Decision Process (POMDP) better captures the dynamics of these types of real-world environments. An extension of an MDP, a POMDP assumes that an agent does not receive the true state of the system, instead receiving only observations o ∈ Ω generated from the underlying system state according to some unknown observation model o ∼ O(s). Deep Q-learning has no reliable way to learn the underlying state of the POMDP, as in general Q(o, a|θ) ≠ Q(s, a|θ), and will only perform well if the observations reflect the underlying state well. Returning to our medical application, the system state might be the patient's unknown clinical status or disease severity, and our observations in the form of vitals or laboratory measurements offer some insight into the state. The Deep Recurrent Q-Network (DRQN) BID13 ) extends vanilla deep Q-networks (DQNs) by using recurrent LSTM layers BID15 ), which are well known to capture long-term dependencies. LSTM recurrent neural network (RNN) models have frequently been used in past applications to medical time series, such as BID23. In our experiments we investigate the effect of replacing fully connected neural network layers with LSTM layers in our Q-network architecture in order to test how realistic the Markov assumption is in our application.
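To make the update rule concrete, here is a minimal tabular sketch of the Q-learning step described above; the toy environment, state/action indices, and hyperparameter values are illustrative assumptions rather than details from the paper.

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step: move Q(s, a) toward the Bellman target."""
    target = r + gamma * np.max(Q[s_next])      # r + gamma * max_a' Q(s', a')
    Q[s, a] += alpha * (target - Q[s, a])       # adjust toward the observed target
    return Q

# Usage on a toy MDP with 5 states and 2 actions (illustrative only)
Q = np.zeros((5, 2))
Q = q_learning_update(Q, s=0, a=1, r=1.0, s_next=2)
```

Deep Q-learning replaces the table lookup with a network forward pass and minimizes the same Bellman error by stochastic gradient descent.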
Multi-output Gaussian processes (MGPs) are commonly used probabilistic models for irregularly sampled multivariate time series, as they seamlessly handle variable spacing, differing numbers of observations per series, and missing values. In addition, they maintain estimates of uncertainty about the state of the series. MGPs have been frequently applied to model patient physiological time series, e.g. BID12, BID7. Given M time series (physiological labs/vitals), an MGP is specified by a mean function for each series $\{\mu_m(t)\}_{m=1}^{M}$, commonly assumed to be zero, and a covariance function or kernel K. Letting $f_m(t)$ denote the latent function for series m at time t, then $K(t, t', m, m') = \mathrm{cov}(f_m(t), f_{m'}(t'))$. Typically the actual observations are centered on the latent functions according to some distribution, e.g. $y_m(t) \sim \mathcal{N}(f_m(t), \sigma_m^2)$, with $\{\sigma_m^2\}$ noise parameters. We use the linear model of coregionalization covariance function with Ornstein-Uhlenbeck base kernels $k(t, t') = e^{-|t - t'|/l}$ to flexibly model temporal correlations in time as well as covariance structure between different physiological variables. For each patient, letting t denote the complete set of measurement times across all observations, the full joint kernel is $K(\mathbf{t}, \mathbf{t}) = \sum_{p=1}^{P} B_p \otimes k_p(\mathbf{t}^*, \mathbf{t}^*)$, where P denotes the number of mixture kernels and $\mathbf{t}^*$ denotes the time vector for each physiological sign, assumed here to be the same for notational convenience; in practice the full kernel need only be computed at the observed variables. Each $B_p \in \mathbb{R}^{M \times M}$ encodes the scale covariance between different time series. We found that P = 2 works well in practice and allows learning of correlations on both short and long time scales. Given the MGP kernel hyperparameters shared across all patients, collectively referred to as η, imputation and interpolation at arbitrary times can be computed using either the posterior mean or the full posterior distribution over unknown function values. Multi-output Gaussian processes and recurrent neural networks can be combined and trained end-to-end (MGP-RNNs), in order to solve supervised learning problems for sequential data BID10, BID11 ). This methodology was shown to exhibit superior predictive performance at early detection of sepsis from clinical time series data, when compared with vanilla RNNs with last-one-carried-forward imputation. In fitting the two models end-to-end, the MGP hyperparameters are learned discriminatively, in essence learning an imputation and interpolation mechanism tuned for the supervised task at hand. Learning an MGP-RNN consists of minimizing an expectation of some loss function with respect to the posterior distribution of the MGP. Letting z denote a set of latent time series values distributed according to an MGP posterior, and g(z) denote the prediction(s) made by an RNN from this time series, the goal is to minimize $\mathbb{E}_{z \sim \mathrm{MGP}}[l(o, g(z))]$, where l is some loss function (e.g. cross-entropy for a classification task) and o is the true label(s). We can express the MGP-distributed latent variable z as $z = \mu_z + R_z \xi$, where $\mu_z$ is the posterior mean, $R_z R_z^\top = \Sigma_z$ with $\Sigma_z$ the posterior covariance, and $\xi \sim \mathcal{N}(0, I)$. This allows us to apply the reparameterization trick BID19 ) and use Monte Carlo sampling to compute approximate gradients of this expectation with respect to both the MGP hyperparameters η and the RNN parameters θ, so that the loss can be minimized via stochastic gradient descent.
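The reparameterization step lends itself to a short sketch. The following assumes the MGP posterior mean and covariance at the grid times have already been computed; function and variable names are ours, and the jitter term is a standard numerical-stability assumption.

```python
import torch

def sample_mgp_posterior(mu_z, Sigma_z, n_samples=25, jitter=1e-6):
    """Differentiable samples z = mu_z + R_z @ xi, where R_z R_z^T = Sigma_z."""
    d = mu_z.shape[0]
    # Cholesky factor of the posterior covariance (jitter keeps it positive definite)
    R_z = torch.linalg.cholesky(Sigma_z + jitter * torch.eye(d))
    xi = torch.randn(n_samples, d)               # xi ~ N(0, I)
    # Gradients flow back to mu_z and Sigma_z (and hence the MGP hyperparameters eta)
    return mu_z + xi @ R_z.T                     # shape (n_samples, d)
```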
The stochasticity introduced into this learning procedure by the Monte Carlo sampling additionally acts as a form of regularization, and helps prevent the RNN from overfitting. In Section 3 we show how this can be applied to a reinforcement learning task. There has been substantial recent interest in the development of machine learning methodologies motivated by healthcare data. However, most prior work in clinical machine learning focuses on supervised tasks, such as diagnosis BID9 ) or risk stratification BID10 ). Many recent papers have developed models for early detection of sepsis, a problem related to our task of learning treatments for sepsis, e.g. BID35, BID14, BID11. However, as supervised problems rely on ground truth, they cannot be applied to treatment recommendation unless the assumption is made that past training examples of treatments represent optimal behavior. Instead, it is preferable to frame the problem using reinforcement learning in order to learn optimal treatment actions from data collected from potentially suboptimal actions. While deep reinforcement learning has seen huge success over the past few years, only very recently have reinforcement learning methods been designed with healthcare applications in mind. Applying reinforcement learning methods to healthcare data is difficult, as it requires careful consideration to set up the problem, especially the rewards. Furthermore, it is typically not possible to collect additional data, and so evaluating learned policies on retrospective data presents a challenge. Most related to this paper are BID29 and BID20, who also look at the problem of learning optimal sepsis treatments. We build off of their work by using a more sophisticated network architecture that accounts for both memory, through the use of DRQNs, and uncertainty in time series imputation and interpolation, using MGPs. Other relevant work includes BID26, who use a simpler learning algorithm to learn optimal strategies for ventilator weaning, and BID25, who also use a deep RL approach for modeling ICU heparin dosing as a POMDP with discriminative hidden Markov models and Q-networks. There also exists a rich body of work from the statistics and causal inference literature on learning dynamic treatment regimes, e.g. BID3, BID34, although those models are typically fairly simple for ease of interpretability. We now introduce Multi-Output Gaussian Process Deep Recurrent Q-Networks, or MGP-DRQNs, a novel reinforcement learning algorithm for learning optimal treatment policies from noisy, sparsely sampled, and frequently missing clinical time series data. We assume a discrete action space, a ∈ A = {1, . . ., A}. Let x denote T regularly spaced grid times at which we would like to learn optimal treatment decisions. Given a set of clinical physiological time series y that we assume to be distributed according to an MGP, we can compute a posterior distribution for $z_t \mid y$, the latent unobserved time series values at each grid time. The loss function we optimize is similar to that in normal deep Q-learning, with the addition of the expectation due to the MGP and the fact that we compute the loss over full patient trajectories. In particular, we learn optimal DRQN parameters θ* and MGP hyperparameters η* via $(\theta^*, \eta^*) = \arg\min_{\theta, \eta}\ \mathbb{E}\Big[\, \mathbb{E}_{z \sim \mathrm{MGP}(\eta)}\Big[ \sum_{t=1}^{T} \big(Q^{(t)}_{\mathrm{target}} - Q([z_t, s_t], a_t \mid \theta)\big)^2 \Big] \Big]$, where the t'th target value is $Q^{(t)}_{\mathrm{target}} = r_t + \gamma \max_{a'} Q([z_{t+1}, s_{t+1}], a')$, the outer expectation is over training samples, and the inner one is with respect to the MGP posterior for one patient.
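A minimal sketch of this training objective, reusing the posterior sampler from the previous sketch; the Q-network interface, tensor shapes, and terminal-step handling are our assumptions for illustration, not the authors' exact implementation.

```python
import torch

def mgp_drqn_loss(qnet, target_qnet, mu_z, Sigma_z, s, a, r, n_mc=25, gamma=0.99):
    """Monte Carlo estimate of E_{z~MGP}[ sum_t (Q_target^(t) - Q([z_t, s_t], a_t))^2 ].

    mu_z, Sigma_z: MGP posterior over the latent series at the T grid times;
    s: other model inputs per grid time, shape (T, k); a, r: actions and rewards, length T.
    """
    T = r.shape[0]
    loss = 0.0
    for _ in range(n_mc):
        # One latent trajectory sample, reshaped to (T, n_series); ordering assumed
        z = sample_mgp_posterior(mu_z, Sigma_z, n_samples=1)[0].view(T, -1)
        x = torch.cat([z, s], dim=-1)                         # [z_t, s_t] at each grid time
        q = qnet(x).gather(1, a.view(-1, 1)).squeeze(1)       # Q([z_t, s_t], a_t | theta)
        with torch.no_grad():
            q_next = target_qnet(x).max(dim=1).values         # max_a' Q([z_{t+1}, s_{t+1}], a')
            target = r.clone()
            target[:-1] += gamma * q_next[1:]                 # final step keeps only the reward
        loss = loss + ((target - q) ** 2).sum()
    return loss / n_mc
```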
We concatenate the two separate types of model inputs at time t, with z_t denoting latent variables distributed according to an MGP posterior and s_t denoting the other relevant inputs to the model, such as static baseline covariates. In Section 4.1 we go into detail on the particular variables included in s_t. We use a dueling double-deep Q-network architecture, similar to BID29. The dueling Q-network architecture BID39 ) has separate value and advantage streams to separate the effect of a patient being in a good underlying state from a good action being taken. The double-deep Q-network architecture BID38 ) helps correct overestimation of Q-values by using a second target network to compute the Q-values in the target $Q_{\mathrm{target}}$. Finally, we use prioritized experience replay to speed learning, so that patient encounters with higher training error are resampled more frequently. We use 2 LSTM layers with 64 hidden units each that feed into a final fully connected layer with 64 hidden units, before splitting into equally sized value and advantage streams that are then projected onto the action space to obtain Q-value estimates. We implemented our methods in TensorFlow using the Adam optimizer BID18 ) with minibatches of 50 encounters sampled at a time, a learning rate of 0.001, and L2 regularization on weights. We use 25 Monte Carlo samples from the MGP for each sampled encounter in order to approximate the expected loss and compute approximate gradients, and these samples and other inputs are fed in a forward pass through the DRQN to get predictions Q(s, a). We will release source code via GitHub after the review period. In this section we first describe the details of our dataset of septic patients before highlighting how the experiments were set up and how the algorithms were evaluated. Our dataset consists of information collected during 9,255 patient encounters resulting in sepsis at our university hospital, spanning a period of 15 months. We define sepsis to be the first time at which a patient simultaneously had persistently abnormal vitals (as measured by a 2+ SIRS score, BID2), a suspicion of infection (as measured by an order for a blood culture), and an abnormal laboratory value indicative of organ damage. This differs from the new Sepsis-3 definition BID32 ), which has since been largely criticized for detecting sepsis late in the clinical course. We break the full dataset into 7,867 training patient encounters and reserve the remaining 1,388 for testing. We discretize the data to learn actions in 4-hour windows. We emphasize that the raw data itself is not down-sampled; rather, we use the MGP to learn a posterior for the time series values every 4 hours. Actions for the RL setup consist of 3 treatments commonly given to septic patients: antibiotics, vasopressors, and IV fluids. Antibiotics and vasopressors are broken down into 3 categories, based on whether 0, 1, or 2+ were administered in each 4-hour window. For IV fluids, we consider 5 discrete categories: either 0, or one of 4 aggregate doses based on empirical quartiles of total fluid volumes. This yields a discrete action space with 3 × 3 × 5 = 45 distinct actions. Our data consist of 36 longitudinal physiological variables (e.g. blood pressure, pulse, white blood cell count), 2 longitudinal categorical variables, and 38 variables available at baseline (e.g. age, previous medical conditions).
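For concreteness, here is a PyTorch sketch of a dueling recurrent Q-network matching the description above (two 64-unit LSTM layers, a 64-unit fully connected layer, then value and advantage streams). The paper's implementation is in TensorFlow, so this is an approximate re-expression, and the class and argument names are ours.

```python
import torch
import torch.nn as nn

class DuelingDRQN(nn.Module):
    """Dueling recurrent Q-network: LSTM trunk, then value and advantage streams."""
    def __init__(self, n_inputs=165, n_actions=45, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_inputs, hidden, num_layers=2, batch_first=True)
        self.fc = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)               # state-value stream V
        self.advantage = nn.Linear(hidden, n_actions)   # advantage stream A

    def forward(self, x):                               # x: (batch, T, n_inputs)
        h, _ = self.lstm(x)
        h = self.fc(h)
        v, adv = self.value(h), self.advantage(h)
        # Standard dueling combination: Q = V + A - mean_a A
        return v + adv - adv.mean(dim=-1, keepdim=True)
```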
8 medications tangential to sepsis treatment are included as inputs to the MGP-DRQN, as well as an indicator for which of the 45 actions was administered at the previous time step. Additionally, 36 indicator variables for whether or not each lab/vital was recently sampled allow the model to learn from informative sampling due to non-random missingness. In total, there are 165 input observation variables to each of the Q-network models at each time. Our outcome of interest is mortality within 30 days of onset of sepsis. We use a sparse reward function in this initial work, so that the reward at every non-terminal time point is 0, with a reward of ±10 at the end of a trajectory based on patient survival/death. Although this presents a challenging credit assignment problem, it allows the data to inform what actions should be taken to reduce the chance of death without being overly prescriptive. We use SARSA, an on-policy algorithm, to estimate state-action values for the physician policy. We compare a number of different architectures for learning optimal sepsis treatments. In addition to our proposed MGP-DRQN, we compare against MGP-mean-DRQN, a variant where we move the posterior expectation inside the DRQN loss function, meaning we use the posterior mean of the MGP rather than Monte Carlo samples from the MGP. We also compare against a DRQN with identical architecture, but replace the MGP with last-one-carried-forward imputation to fill in any missing values, using the mean when there are multiple measurements. We also compare against a vanilla DQN, an MGP-DQN, and an MGP-mean-DQN, with an equivalent number of layers and parameters, to test the effect of the recurrence in the DRQN models. We use Doubly Robust Off-policy Value Evaluation BID16 ) to compute unbiased estimates of the value of each learned policy using our observed off-policy data. For each patient trajectory in the test set we estimate its value using this method, and then average the results. In order to apply this method we train an MGP-RNN to estimate the action probabilities of the physician policy. In FIG0 we show the results of using SARSA to estimate expected returns for the physician policy on the test data. The Q-values appear to be well calibrated with mortality, as patients who were estimated to have higher expected returns tended to have lower mortality. Due to small sample sizes for very low expected returns, the mortality rate does not always monotonically decrease. We can estimate the potential reduction in mortality a learned policy might achieve by computing an unbiased estimate of the policy value, as described in Section 4.3, and then using the results in FIG0. Table 1 contains the policy value estimates for each algorithm considered, along with estimated mortality rates. The physician policy has an estimated value of 5.52 and corresponding mortality of 13.3%, matching the observed mortality in the test set of 13.3%. Overall the MGP-DRQN performs best and might reduce mortality by as much as 8%. The DRQN architectures tended to yield higher expected returns, probably because they are able to retain some memory of past clinical states and actions taken. The MGP consistently improved results as well, and the additional uncertainty information contained in the full MGP posterior appeared to do better than the policies that only used the posterior mean.
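The per-trajectory estimator referenced above (Doubly Robust Off-policy Value Evaluation, BID16) admits a compact backward recursion; this sketch assumes the behavior-policy probabilities and fitted Q/V estimates have already been computed, and all names are ours.

```python
import numpy as np

def doubly_robust_value(rewards, pi_e, pi_b, q_hat, v_hat, gamma=0.99):
    """Doubly robust value estimate for one trajectory.

    pi_e, pi_b: probabilities of the taken action under the evaluation (learned)
    and behavior (physician) policies at each step; q_hat: Q estimates for the
    taken actions; v_hat: state-value estimates.
    """
    v_dr = 0.0
    for t in reversed(range(len(rewards))):
        rho = pi_e[t] / pi_b[t]                  # per-step importance weight
        v_dr = v_hat[t] + rho * (rewards[t] + gamma * v_dr - q_hat[t])
    return v_dr
```

Averaging this quantity over test-set trajectories yields the kind of policy value estimates reported in Table 1.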
Policy          Expected Return   Estimated Mortality
Physician       5.52              13.3 ± 0.7%
MGP-DRQN        7.51               5.1 ± 0.5%
MGP-mean-DRQN   6.97               6.6 ± 0.4%
DRQN            6.63               8.4 ± 0.4%
MGP-DQN         7.05               6.6 ± 0.4%
MGP-mean-DQN    6.73               7.5 ± 0.4%
DQN             6.09              10.6 ± 0.5%

Table 1: Expected returns for the various policies considered. For the 6 reinforcement learning algorithms considered, we estimate their expected returns using an off-policy value evaluation algorithm. Using the results from FIG0, we estimate the potential expected mortality reduction associated with each policy.

We also qualitatively evaluate the results of the policy from our best performing learning algorithm, the MGP-DRQN. In FIG2 we compare the number of times each type of action was actually taken by physicians with the number of times the learned policy selected that action. The MGP-DRQN policy tended to recommend more use of antibiotics and more vasopressors than physicians actually used, while, somewhat surprisingly, recommending less use of IV fluids. In FIG3, we show how mortality rates differ on the test set as a function of how different the observed physician action was from what the MGP-DRQN would have recommended. For all 3 types of treatments, there appears to be a local minimum at 0 and we observe a V shape, indicating that, empirically, mortality tended to be lowest when the clinicians took the same actions that the MGP-DRQN would have. Uncertainty tends to be higher due to smaller sample sizes in situations where there is larger disparity. After the patient is transferred to the Intensive Care Unit, their white blood cell count continues to rise (a sign of worsening infection) and their blood pressure continues to fall (a sign of worsening shock). By hour 14, the RL model starts and continues to recommend use of vasopressors to attempt to increase blood pressure, but they are not actually administered until about 16 hours later, at hour 30. Ultimately, by hour 45 care was withdrawn and the patient passed away at hour 50. Cases such as this one illustrate the potential benefits of using our learned treatment policy in a decision support tool to recommend treatments to providers. If such a tool were used in this situation, it is possible that earlier treatments and more aggressive interventions might have resulted in a different outcome. In this paper we presented a new framework combining multi-output Gaussian processes and deep reinforcement learning for clinical problems, and found that our approach performed well in estimating optimal treatment strategies for septic patients. The use of recurrent structure in the Q-network architecture yielded higher expected returns than a standard Q-network, accounting for the non-Markovian nature of real-world medical data. The multi-output Gaussian process also improved performance by offering a more principled method for interpolation and imputation, and use of the full MGP posterior improved upon the results from just using the posterior mean. In the future, we could include treatment recommendations from our learned policies in the dashboard application we have developed for early detection of sepsis. The treatment recommendations might help providers better care for septic patients after sepsis has been properly identified, and start treatments faster. There are many potential avenues for future work. One promising direction is to investigate the use of more complex reward functions, rather than the sparse rewards used in this work.
More sophisticated rewards might take into account clinical targets for maintaining hemodynamic stability, and penalize an overzealous model that recommends too many unnecessary actions. Our modeling framework is fairly generalizable, and can easily be applied to other medical applications where there is a need for data-driven decision support tools. In future work we plan to use similar methods to learn optimal treatment strategies for patients with cardiogenic shock, and to learn effective insulin dosing regimes for patients on high-dose steroids.
We combine multi-output Gaussian processes with deep recurrent Q-networks to learn optimal treatments for sepsis and show improved performance over standard deep reinforcement learning methods.
902
scitldr
Unsupervised and semi-supervised learning are important problems that are especially challenging with complex data like natural images. Progress on these problems would accelerate if we had access to appropriate generative models under which to pose the associated inference tasks. Inspired by the success of Convolutional Neural Networks (CNNs) for supervised prediction in images, we design the Neural Rendering Model (NRM), a new hierarchical probabilistic generative model whose inference calculations correspond to those in a CNN. The NRM introduces a small set of latent variables at each level of the model and enforces dependencies among all the latent variables via a conjugate prior distribution. The conjugate prior yields a new regularizer for training CNNs based on the paths rendered in the generative model: the Rendering Path Normalization (RPN). We demonstrate that this regularizer improves generalization both in theory and in practice. Likelihood estimation in the NRM yields the new Max-Min cross-entropy training loss, which suggests a new deep network architecture, the Max-Min network, which exceeds or matches the state of the art for semi-supervised and supervised learning on SVHN, CIFAR10, and CIFAR100.
We develop a new deep generative model for semi-supervised learning and propose a new Max-Min cross-entropy for training CNNs.
903
scitldr
Deep reinforcement learning (RL) agents often fail to generalize to unseen environments, even ones semantically similar to those they were trained on, particularly when they are trained on high-dimensional state spaces such as images. In this paper, we propose a simple technique to improve the generalization ability of deep RL agents by introducing a randomized (convolutional) neural network that randomly perturbs input observations. It enables trained agents to adapt to new domains by learning robust features invariant across varied and randomized environments. Furthermore, we consider an inference method based on the Monte Carlo approximation to reduce the variance induced by this randomization. We demonstrate the superiority of our method across 2D CoinRun, 3D DeepMind Lab exploration, and 3D robotics control tasks: it significantly outperforms various regularization and data augmentation methods designed for the same purpose. Deep reinforcement learning (RL) has been applied to various applications, including board games (e.g., Go and Chess), video games (e.g., Atari games and StarCraft), and complex robotics control tasks. However, it has been evidenced in recent years that deep RL agents often struggle to generalize to new environments, even ones semantically similar to their training environments. For example, RL agents that learned a near-optimal policy for training levels in a video game fail to perform accurately in unseen levels, while a human can seamlessly generalize across similar tasks. Namely, RL agents often overfit to training environments, and this lack of generalization ability makes them unreliable in several applications, such as health care and finance. The generalization of RL agents can be characterized by visual changes, different dynamics, and various structures. In this paper, we focus on generalization across tasks where the trained agents encounter unseen visual patterns at test time, e.g., different styles of backgrounds, floors, and other objects (see Figure 1). We also found that RL agents can completely fail due to small visual changes, because it is challenging to learn generalizable representations from high-dimensional input observations such as images. To improve generalization, several strategies, such as regularization and data augmentation, have been proposed in the literature (see Section 2 for further details). In particular, prior work showed that training RL agents in various environments generated by randomizing rendering in a simulator improves the generalization performance, leading to better performance in real environments. This implies that RL agents can learn invariant and robust representations if diverse input observations are provided during training. However, that method is limited by requiring a physics simulator, which may not always be available. This motivates our approach of developing a simple and plausible method applicable to training deep RL agents. The main contribution of this paper is to develop a simple randomization technique for improving the generalization ability across tasks with various unseen visual patterns. Our main idea is to utilize random (convolutional) networks to generate randomized inputs (see Figure 1(a)), and to train RL agents (or their policies) on these randomized inputs. Specifically, by re-initializing the parameters of the random networks at every iteration, the agents are encouraged to train under a broad range of perturbed low-level features, e.g., various textures, colors, or shapes.
We discover that the proposed idea guides RL agents to learn generalizable features that are more invariant in unseen environments (see Figure 3) than those learned with conventional regularization and data augmentation techniques. We also provide an inference technique based on the Monte Carlo approximation, which stabilizes the performance by reducing the variance incurred by our randomization method at test time. We demonstrate the effectiveness of the proposed method on the 2D CoinRun game, the 3D DeepMind Lab exploration task, and the 3D robotics control task. For evaluation, the performance of the trained agents is measured in unseen environments with various visual and geometrical patterns (e.g., different styles of backgrounds, objects, and floors), guaranteeing that the trained agents encounter unseen inputs at test time. Note that learning invariant and robust representations against such changes is essential to generalize to unseen environments. In our experiments, the proposed method significantly reduces the generalization gap in unseen environments, unlike conventional regularization and data augmentation techniques. For example, compared to agents trained with the cutout data augmentation method, our method improves the success rate from 39.8% to 58.7% on 2D CoinRun, the total score from 55.4 to 358.2 on 3D DeepMind Lab, and the total score from 31.3 to 356.8 on the Surreal robotics control task. Our results can inform the study of other generalization domains, such as tasks with different dynamics, as well as real-world problems such as sim-to-real transfer. Generalization in deep RL. Recently, the generalization performance of RL agents has been investigated by splitting training and test environments using random seeds and using distinct sets of levels in video games. Regularization is one of the major directions for improving the generalization ability of deep RL algorithms: prior studies showed that regularization methods can improve the generalization performance of RL agents on various game modes of Atari and on procedurally generated arcade environments called CoinRun. On the other hand, data augmentation techniques have also been shown to improve generalization, including a domain randomization method that generates simulated inputs by randomizing rendering in the simulator, and a data augmentation method that modifies the cutout method. Our method can be combined with these prior methods to further improve the generalization performance. Random networks for deep RL. Random networks have been utilized for several different purposes in deep RL. One line of work utilized a randomly initialized neural network to define an intrinsic reward for visiting unexplored states in challenging exploration problems: by learning to predict the reward of the random network, the agent can recognize unexplored states. Another studied a method to improve ensemble-based approaches by adding a randomized network to each ensemble member, improving uncertainty estimation and efficient exploration in deep RL. Our method is different because we introduce a random network to improve the generalization ability of RL agents. Transfer learning. Generalization is also closely related to transfer learning, which is used to improve the performance on a target task by transferring knowledge from a source task.
However, unlike in supervised learning, it has been observed that fine-tuning a model pre-trained on the source task for the target task is often not beneficial in deep RL. Prior work has proposed a domain transfer method using generative adversarial networks together with regularization techniques to improve the performance of fine-tuning, and a multi-stage RL approach that learns to extract disentangled representations from the input observation and then trains agents on those representations. Alternatively, we focus on the zero-shot performance of each agent at test time, without further fine-tuning of the agent's parameters. We consider a standard reinforcement learning (RL) framework where an agent interacts with an environment in discrete time. Formally, at each timestep t, the agent receives a state s_t from the environment and chooses an action a_t based on its policy π. The environment returns a reward r_t and the agent transitions to the next state s_{t+1}. The return $R_t = \sum_{k=0}^{\infty} \gamma^k r_{t+k}$ is the total accumulated reward from timestep t with a discount factor γ ∈ (0, 1]. RL then maximizes the expected return from each state s_t. We introduce a random network f with its parameters φ initialized from a prior distribution, e.g., the Xavier normal distribution. Instead of the original input s, we train an agent using a randomized input ŝ = f(s; φ). For example, in the case of policy-based methods, the parameters θ of the policy network π are optimized by minimizing the following policy gradient objective function: $\mathcal{L}_{\mathrm{policy}}(\theta) = -\,\mathbb{E}_{(s_t, a_t, R_t) \sim \mathcal{D}}\big[\, R_t \log \pi(a_t \mid f(s_t; \phi); \theta) \,\big]$, where $\mathcal{D} = \{(s_t, a_t, R_t)\}$ is a set of past transitions with cumulative rewards. By re-initializing the parameters φ of the random network at every iteration, the agents are trained on varied and randomized input observations (see Figure 1(a)). Namely, environments with various visual patterns, but with the same semantics, are generated by randomizing the network. Our agents are expected to adapt to new environments by learning invariant representations (see Figure 3 for supporting experiments). To learn more invariant features, the following feature matching (FM) loss between hidden features from clean and randomized observations is also considered: $\mathcal{L}_{\mathrm{FM}}(\theta) = \mathbb{E}_{s \sim \mathcal{D}}\big[\, \| h(f(s; \phi); \theta) - h(s; \theta) \|^2 \,\big]$, where h(·) denotes the output of the penultimate layer of the policy π. The hidden features from clean and randomized inputs are combined to learn features that are more invariant to changes in the input observations. Namely, the total loss is $\mathcal{L} = \mathcal{L}_{\mathrm{policy}} + \beta\, \mathcal{L}_{\mathrm{FM}}$, where β > 0 is a hyper-parameter. The full procedure is summarized in Algorithm 1 in Appendix B.

Table 1: Classification accuracy (%) on the dogs vs. cats dataset. ResNet-18 + ours achieves 95.9 ± 1.6 (train) and 84.4 ± 4.5 (test). Results show the mean and standard deviation averaged over three runs, and the best result is indicated in bold.

Figure 2: Samples from the dogs vs. cats dataset. The training set consists of bright dogs and dark cats, whereas the test set consists of dark dogs and bright cats.

Details of the random networks. We propose to utilize a single-layer convolutional neural network (CNN) as the random network, where the output has the same dimension as the input (see Appendix F for additional experimental results on various types of random networks). To re-initialize the parameters of the random network, we utilize the following mixture of distributions: $P(\phi) = \alpha \, \mathbb{I}(\phi = I) + (1 - \alpha) \, N\!\big(0,\; \sqrt{2 / (n_{\mathrm{in}} + n_{\mathrm{out}})}\big)$, where I is an identity kernel, α ∈ [0, 1] is a positive constant, N denotes the normal distribution, and n_in, n_out are the numbers of input and output channels, respectively.
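A minimal PyTorch sketch of this re-initialization scheme together with the feature matching loss from above; the identity-kernel construction and the use of mse_loss (proportional to the squared L2 distance) are implementation assumptions of ours, and `h` stands for a user-supplied function extracting the policy's penultimate-layer features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Single-layer random CNN whose output matches the input dimensions (k=3, padding=1)
random_net = nn.Conv2d(3, 3, kernel_size=3, padding=1, bias=False)

def reinit_random_net(alpha=0.1):
    """Re-sample phi each iteration: identity kernel w.p. alpha, else Xavier normal."""
    with torch.no_grad():
        if torch.rand(1).item() < alpha:
            w = torch.zeros_like(random_net.weight)
            for c in range(w.shape[0]):
                w[c, c, 1, 1] = 1.0              # identity: pass the clean input through
            random_net.weight.copy_(w)
        else:
            nn.init.xavier_normal_(random_net.weight)

def feature_matching_loss(h, s):
    """L_FM: match penultimate-layer features of randomized and clean inputs."""
    return F.mse_loss(h(random_net(s)), h(s))
```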
In this mixture, clean inputs are used with probability α, because training with only randomized inputs can make training difficult. The Xavier normal distribution is used for randomization because it maintains the variance between the input s and the randomized input ŝ. We empirically observe that this distribution stabilizes training. Removing visual bias. To confirm the desired effects of our method, we conduct an image classification experiment on the dogs and cats database from Kaggle (https://www.kaggle.com/c/dogs-vs-cats). Following the same setup as prior work, we construct datasets with an undesirable bias as follows: the training set consists of bright dogs and dark cats, while the test set consists of dark dogs and bright cats (see Appendix J for further details). A classifier is expected to make decisions based on the undesirable bias (e.g., brightness and color), since CNNs are biased towards texture or color rather than shape. Table 1 shows that ResNet-18 does not generalize effectively, due to overfitting to the undesirable bias in the training data. To address this issue, several image processing methods, such as grayout (GR), cutout (CO; DeVries & Taylor 2017), inversion (IV), and color jitter (CJ), can be applied (see Appendix E for further details). However, they are not effective in improving the generalization ability compared to our method. This confirms that our approach makes DNNs capture more desirable and meaningful information, such as shape, by changing the visual appearance of attributes and entities in images while effectively keeping the semantic information. Prior sophisticated methods require additional information to eliminate such an undesired bias, while our method does not: using the known bias information (i.e., {dark, bright}) and an ImageNet pre-trained model, a prior method achieves 90.3%, while our method achieves 84.4% without using either. Although we mainly focus on RL applications, our idea can also be explored in this direction. Since the parameters of the random network are drawn from a prior distribution P(φ), our policy is modeled by a stochastic neural network: $\pi(a \mid s; \theta) = \mathbb{E}_{\phi}\big[\pi(a \mid f(s; \phi); \theta)\big]$. Based on this interpretation, our training procedure (i.e., randomizing the parameters) consists of training a stochastic model using the Monte Carlo (MC) approximation (with one sample per iteration). Therefore, at inference or test time, an action a is taken by approximating the expectation as follows: $a = \arg\max_{a}\ \tfrac{1}{M} \sum_{m=1}^{M} \pi\big(a \mid f(s; \phi^{(m)}); \theta\big)$, where $\phi^{(m)} \sim P(\phi)$ and M is the number of MC samples. In other words, we generate M random inputs for each observation and then aggregate their decisions. The results show that this estimator improves the performance of the trained agents by approximating the posterior distribution more accurately (see Figure 3(d)). In this section, we demonstrate the effectiveness of the proposed method on 2D CoinRun, the 3D DeepMind Lab exploration task, and a 3D robotics control task. To evaluate the generalization ability, we measure the performance of trained agents in unseen environments which consist of different styles of backgrounds, objects, and floors. Due to space limitations, we provide more detailed experimental setups and results in the Appendix. For the CoinRun and DeepMind Lab experiments, similar to prior work, we take the CNN architecture used in IMPALA as the policy network, and use the Proximal Policy Optimization (PPO) method to train the agents.
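The MC inference rule above can be sketched as follows; `policy` is assumed to return an action distribution, and `sample_random_net` is a hypothetical helper that returns a freshly re-initialized random network with φ ~ P(φ).

```python
import torch

def mc_action(policy, sample_random_net, s, M=10):
    """Approximate pi(a|s) = E_phi[ pi(a | f(s; phi)) ] with M Monte Carlo samples."""
    probs = 0.0
    with torch.no_grad():
        for _ in range(M):
            f = sample_random_net()             # fresh phi ~ P(phi)
            probs = probs + policy(f(s))        # action distribution for this sample
    return (probs / M).argmax(dim=-1)           # aggregate the M decisions, act greedily
```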
At each timestep, agents are given an observation frame of size 64 × 64 as input (resized from the raw observation of size 320 × 240 in DeepMind Lab), and the trajectories are collected with 256-step rollouts for training. For the Surreal robotics experiments, similar to prior work, a hybrid of CNN and long short-term memory (LSTM) architectures is taken as the policy network, and a distributed version of PPO (i.e., actors collect a massive amount of trajectories, and the centralized learner updates the model parameters using PPO) is used to train the agents. We measure the performance in the unseen environment every 10M timesteps and report the mean and standard deviation across three runs. Our proposed method, which augments PPO with random networks and the feature matching (FM) loss (denoted PPO + ours), is compared with several regularization and data augmentation methods. As regularization methods, we compare dropout (DO; Srivastava et al. 2014), L2 regularization (L2), and batch normalization (BN; Ioffe & Szegedy 2015). For those methods, we use the hyperparameters suggested in prior work, which were empirically shown to be effective: a dropout probability of 0.1 and a coefficient of 10^{-4} for L2 regularization. We also consider various data augmentation methods: a variant of cutout (CO; DeVries & Taylor 2017), grayout (GR), inversion (IV), and color jitter (CJ) obtained by adjusting brightness, contrast, and saturation (see Appendix E for more details). As an upper bound, we report the performance of agents trained directly on the unseen environments, denoted PPO (oracle). For our method, we use β = 0.002 for the weight of the FM loss, α = 0.1 for the probability of skipping the random network, M = 10 for the MC approximation, and a single-layer CNN with a kernel size of 3 as the random network. Task description. In this task, an agent is located at the leftmost side of the map and the goal is to collect the coin located at the rightmost side of the map within 1,000 timesteps. The agent observes its surrounding environment from a third-person point of view, where the agent is always located at the center of the observation. CoinRun contains an arbitrarily large number of levels which are generated deterministically from a given seed. In each level, the style of the background, floor, and obstacles is randomly selected from the available themes (34 backgrounds, 6 grounds, 5 agents, and 9 moving obstacles). Some obstacles and pitfalls are distributed between the agent and the coin, where a collision with them results in the agent's immediate death. We measure success rates, which correspond to the number of collected coins divided by the number of played levels. Ablation study on small-scale environments. First, we train agents on one level for 100M timesteps and measure the performance in unseen environments obtained by only changing the style of the background, as shown in Figure 3(a). Note that these visual changes are not significant to the game's dynamics, but the agent should achieve a high success rate if it generalizes accurately. However, Table 2 shows that all baseline agents fail to generalize to unseen environments, while they achieve near-optimal performance in the seen environment. This shows that regularization techniques have no significant impact on improving the generalization ability. Even though data augmentation techniques, such as cutout (CO) and color jitter (CJ), slightly improve the performance, our proposed method is the most effective because it can produce diverse novelty in attributes and entities.
Training with randomized inputs can degrade the training performance, but the high expressive power of deep networks limits this effect in practice. Embedding analysis. We analyze whether the hidden representation of trained RL agents exhibits meaningful abstraction in the unseen environments. The features in the penultimate layer of trained agents are visualized, reduced to two dimensions using t-SNE. Figure 3 shows the projection of trajectories taken by human demonstrators in seen and unseen environments (see Figure 8 in Appendix C for further results). Here, trajectories from both seen and unseen environments are aligned in the hidden space of our agents, while the baselines yield scattered and disjointed trajectories. This implies that our method makes RL agents capable of learning invariant and robust representations. To evaluate the quality of the hidden representation quantitatively, the cycle-consistency proposed in prior work is also measured. Given two trajectories V and U, v_i ∈ V first locates its nearest neighbor in the other trajectory, $u_j = \arg\min_{u \in U} \|h(v_i) - h(u)\|_2$, where h(·) denotes the output of the penultimate layer of the trained agents. Then, the nearest neighbor of u_j in V is located, i.e., $v_k = \arg\min_{v \in V} \|h(v) - h(u_j)\|_2$, and v_i is defined as cycle-consistent if |i − k| ≤ 1, i.e., it can return to the original point. Note that this cycle-consistency implies that the two trajectories are accurately aligned in the hidden space. We also evaluate the three-way cycle-consistency by measuring whether v_i remains cycle-consistent along both paths, V → U → J → V and V → J → U → V, where J is a third trajectory. Using the trajectories shown in Figure 3(a), Table 2 reports the percentage of input observations in the seen environment (blue curve) that are cycle-consistent with unseen trajectories (red and green curves). Similar to the results shown in Figure 3, our method improves cycle-consistency. Visual interpretation. To verify whether the trained agents can focus on meaningful and high-level information, the activation maps are visualized using Grad-CAM by averaging activations channel-wise in the last convolutional layer, weighted by their gradients. As shown in Figure 4, both the vanilla PPO agent and our agent make decisions by focusing on essential objects, such as obstacles and coins, in the seen environment. However, in the unseen environment, the vanilla PPO agent displays a widely distributed activation map in some cases, while our agent does not. As a quantitative metric, we measure the entropy of the normalized activation maps. Specifically, we first normalize the activations $\sigma_{t,h,w} \in [0, 1]$ such that they represent a 2D discrete probability distribution at timestep t, i.e., $\sum_{h=1}^{H} \sum_{w=1}^{W} \sigma_{t,h,w} = 1$. Then, we measure the entropy averaged over timesteps, $-\frac{1}{T} \sum_{t=1}^{T} \sum_{h=1}^{H} \sum_{w=1}^{W} \sigma_{t,h,w} \log \sigma_{t,h,w}$. Note that the entropy of the activation map quantitatively measures how tightly an agent focuses on salient components in its observation. Results show that our agent produces low entropy in both seen and unseen environments (i.e., 2.28 and 2.44, respectively), whereas the vanilla PPO agent produces low entropy only in the seen environment (2.77 and 3.54 for seen and unseen, respectively). Results on large-scale experiments. Following prior work, the generalization ability is evaluated by training agents on a fixed set of 500 levels of CoinRun.
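As a side note, the cycle-consistency metric used in the embedding analysis above reduces to a few lines; this numpy sketch assumes the penultimate-layer features have already been extracted for both trajectories, and the names are ours.

```python
import numpy as np

def cycle_consistency(H_v, H_u):
    """Fraction of points in trajectory V that are cycle-consistent with U.

    H_v, H_u: penultimate-layer features h(.) per timestep, shapes (n_v, d), (n_u, d).
    """
    consistent = 0
    for i, hv in enumerate(H_v):
        j = np.argmin(np.linalg.norm(H_u - hv, axis=1))       # nearest neighbor in U
        k = np.argmin(np.linalg.norm(H_v - H_u[j], axis=1))   # and back into V
        consistent += int(abs(i - k) <= 1)                    # returned near the start?
    return consistent / len(H_v)
```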
To explicitly separate seen and unseen environments, half of the available themes (i.e., styles of backgrounds, floors, agents, and moving obstacles) are utilized for training, and the performance is measured on 1,000 different levels consisting of unseen themes. As shown in Figure 5(a), our method outperforms all baseline methods by a large margin. In particular, the success rate improves from 39.8% to 58.7% compared to PPO with the cutout (CO) augmentation, showing that our agent learns generalizable representations given a limited number of seen environments. Results on DeepMind Lab. We also demonstrate the effectiveness of our proposed method on DeepMind Lab, a 3D game environment with a first-person point of view and rich visual inputs. The task is designed based on the standard exploration task, where a goal object is placed in one of the rooms in a 3D maze. In this task, agents aim to collect as many goal objects as possible within 90 seconds to maximize their reward. Once the agent collects a goal object, it receives ten points and is relocated to a random place. Similar to the small-scale CoinRun experiment, agents are trained to collect the goal object in a fixed map layout and tested in unseen environments where only the styles of the walls and floors change. We report the mean and standard deviation of the average scores across ten randomly selected map layouts. Additional details are provided in Appendix I. Note that a simple strategy of exploring the map actively and recognizing the goal object achieves high scores, because the maze size is small in this experiment. Even though the baseline agents achieve high scores by learning this simple strategy in the seen environment (see Figure 6(c) in Appendix A for learning curves), Figure 5(b) shows that they fail to adapt to the unseen environments. However, the agent trained with our proposed method achieves high scores in both seen and unseen environments. These results show that our method can learn generalizable representations from high-dimensional and complex input observations (i.e., 3D environments). Results on Surreal robotics control. We evaluate our method on the Block Lifting task using the Surreal distributed RL framework: the Sawyer robot receives a reward if it succeeds in lifting a block randomly placed on a table. We train agents in a single environment and test them in five unseen environments with various styles of tables and blocks (see Appendix K for further details). Figure 5(c) shows that our method achieves a significant performance gain compared to all baselines in unseen environments while maintaining its performance in the seen environment (see Figure 14 in Appendix K), implying that our method preserves essential properties such as the structural spatial features of the input observation. Comparison with domain randomization. To further verify the effectiveness of our method, the vanilla PPO agents are trained on an increasing number of seen environments generated by randomizing rendering in a simulator, while our agent is still trained in a single environment (see Appendices I and K for further details). Table 3 shows that the performance of baseline agents can be improved with domain randomization. However, our method still outperforms the baseline methods trained on more diverse environments than ours, implying that our method is more effective at learning generalizable representations than simply increasing the (finite) number of seen environments.
In this paper, we explore generalization in RL where the agent is required to generalize to new environments with unseen visual patterns but similar semantics. To improve the generalization ability, we propose to randomize the first layer of the CNN to perturb low-level features, e.g., various textures, colors, or shapes. Our method encourages agents to learn invariant and robust representations by producing diverse visual input observations. Such invariant features could be useful for several other related topics, such as adversarial defense in RL (see Appendix D for further discussion), sim-to-real transfer, transfer learning, and online adaptation. We provide more detailed discussions on an extension to dynamics generalization and on failure cases of our method in Appendix L and Appendix M, respectively. An adversarial (visually imperceptible) perturbation to clean input observations can induce DNN-based policies to make incorrect decisions at test time. This undesirable property of DNNs has raised major security concerns. In this section, we evaluate whether the proposed method can improve the robustness against adversarial attacks.

Algorithm 1 PPO + random networks, Actor-Critic Style
  for iteration = 1, 2, ... do
    Sample the parameters φ of the random network from the prior distribution P(φ)
    for actor = 1, 2, ..., N do
      Run policy π(a | f(s; φ); θ) in the given environment for T timesteps
      Compute advantage estimates
    end for
    Optimize the total loss L_random with respect to θ
  end for

Our method is expected to improve robustness against such adversarial attacks because the agents are trained with randomly perturbed inputs. To verify this, adversarial samples are generated using FGSM by perturbing inputs in the direction opposite to the most probable action initially predicted by the policy: $s_{\mathrm{adv}} = s - \varepsilon \, \mathrm{sign}\big(\nabla_s \log \pi(a^* \mid s; \theta)\big)$, where ε is the magnitude of the noise and $a^* = \arg\max_a \pi(a \mid s; \theta)$ is the action from the policy. Table 4 shows that our proposed method can improve the robustness against FGSM attacks with ε = 0.01, which implies that the hidden representations of the trained agents are more robust. In the CoinRun observations, two boxes are painted in the upper-left corner, where their colors represent the x- and y-axis velocities, to help the agents quickly learn to act optimally. In this way, the agent does not need to memorize previous states, so a simple CNN-based policy without an LSTM can perform effectively in our experimental settings. Data augmentation methods. In this paper, we compare a variant of cutout, grayout, inversion, and color jitter. Specifically, the cutout augmentation applies a random number of boxes of random size and color to the input, the grayout method averages all three channels of the input, the inversion method inverts pixel values with a 50% chance, and color jitter changes characteristics of images commonly used for data augmentation in computer vision tasks: brightness, contrast, and saturation. For every timestep in the cutout augmentation, we first randomly choose the number of boxes from zero to five, assign them a random color and size, and place them in the observation. For color jitter, the parameters for brightness, contrast, and saturation are randomly chosen in [0.5, 1.5]. For each episode, the parameters of these methods are randomized and then fixed, such that the same image preprocessing is applied within an episode.
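A minimal numpy sketch of the cutout variant described above; the upper bound on box size is our assumption, as the text only specifies a random number of boxes (zero to five) with random color and size.

```python
import numpy as np

def cutout_augment(obs, max_boxes=5):
    """Paint 0-5 boxes of random size and color onto an (H, W, C) uint8 observation."""
    out = obs.copy()
    h, w, c = out.shape
    for _ in range(np.random.randint(0, max_boxes + 1)):
        bh, bw = np.random.randint(1, h // 2), np.random.randint(1, w // 2)  # box size (assumed bound)
        y, x = np.random.randint(0, h - bh), np.random.randint(0, w - bw)
        out[y:y + bh, x:x + bw] = np.random.randint(0, 256, size=c)          # random solid color
    return out
```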
In this section, we apply random networks at various locations in the network architecture (see Figure 10) and measure the performance on large-scale CoinRun without the feature matching loss. For all methods, a single-layer CNN with a kernel size of 3 is used, and its output tensor is padded to have the same dimensions as the input tensor. As shown in Figure 9, the performance in unseen environments decreases as the random network is placed at higher layers. On the other hand, a random network in the residual connections improves the generalization performance, but it does not outperform the case where the random network is placed at the beginning of the network, suggesting that randomizing only the local features of the inputs is effective for better generalization. For small-scale CoinRun environments, we consider a fixed map layout with two moving obstacles and measure the performance of the trained agents while changing the style of the background (see Figure 11). Below is the list of seen and unseen backgrounds in this experiment:
• Seen backgrounds:
• Unseen backgrounds:
In CoinRun, there are 34 themes for backgrounds, 6 for grounds, 5 for agents, and 9 for obstacles. For the large-scale CoinRun experiment, we train agents on a fixed set of 500 levels using half of the available themes and measure the performance on 1,000 different levels consisting of unseen themes. Specifically, the following is the list of seen and unseen themes used in this experiment:
• Seen backgrounds:
• Unseen backgrounds:
Dataset. The original database is a set of 25,000 images of dogs and cats for training and 12,500 images for testing. The data is manually categorized according to the color of the animal: bright or dark. Biased datasets are constructed such that the training set consists of bright dogs and dark cats, while the test and validation sets contain dark dogs and bright cats. Specifically, the training, validation, and test sets consist of 10,047, 1,000, and 5,738 images, respectively. ResNet-18 is trained with an initial learning rate chosen from {0.05, 0.1}, dropped by a factor of 0.1 at 50 epochs, for a total of 100 epochs. We use Nesterov momentum of 0.9 for SGD, a mini-batch size chosen from {32, 64}, and weight decay set to 0.0001. We report the training and test set accuracies with the hyperparameters chosen by validation. To avoid inductive bias from the pre-training dataset, we do not use a ResNet-18 pre-trained on ImageNet. Our method is evaluated on the Block Lifting task using the Surreal distributed RL framework. In this task, the Sawyer robot receives a reward if it successfully lifts a block randomly placed on a table. Following prior experimental setups, a hybrid CNN-LSTM architecture (see Figure 14(a)) is chosen as the policy network, and a distributed version of PPO (i.e., actors collect a massive amount of trajectories, and the centralized learner updates the model parameters using PPO) is used to train the agents. Agents take 84 × 84 observation frames with proprioceptive features (e.g., robot joint positions and velocities) as inputs and output the mean and log standard deviation for each action dimension. Actions are then sampled from the Gaussian distribution parameterized by this output. Agents are trained in a single environment and tested in five unseen environments with various styles of tables, floors, and blocks, as shown in Figure 15. For the Surreal robot manipulation experiment, the vanilla PPO agent is trained on 25 environments generated by changing the styles of tables and boxes.
Specifically, we use {blue, gray, orange, white, purple} and {red, blue, green, yellow, cyan} for the table and box, respectively.

In this section, we consider an extension to generalization across domains with different dynamics. Similar to dynamics randomization, one can expect our idea to be useful for improving dynamics generalization. To verify this, we conduct experiments on the CartPole and Hopper environments, where the agent takes proprioceptive features (e.g., positions and velocities) as input. The goal of CartPole is to prevent the pole from falling over, while the goal of Hopper is to make a one-legged robot hop forward as fast as possible. Similar to the randomization we applied to visual inputs, we introduce a random layer between the input and the model. As a natural extension of the proposed method, we perform the convolution operation by multiplying a d × d diagonal matrix with the d-dimensional input states. For every training iteration, the diagonal elements are sampled from the uniform distribution U(0.8, 1.2). Note that this randomizes the amplitude of the input states while maintaining their intrinsic information (e.g., the signs of the inputs). PPO is used to train the agents. The mass in the training environments is sampled from {1.0, 2.0, 3.0, 4.0, 5.0}, while it is sampled from {6.0, 7.0, 8.0} during testing. Figure 16 reports the mean and standard deviation across 3 runs. Our simple randomization improves the performance of the agents in unseen environments, while achieving performance comparable to that in seen environments. We believe this evidences a wide applicability of our idea beyond visual changes.

In this section, we verify whether the proposed method can handle color- (or texture-) conditioned RL tasks. One might expect such tasks to be difficult for our method because of the randomization. For example, our method would fail in an extreme seek-avoid object-gathering setup, where the agent must learn to collect good objects and avoid bad objects that have the same shape but different colors. However, our method need not fail on such tasks if other environmental factors (e.g., the shapes of objects in Collect Good Objects in DeepMind Lab) are available to distinguish them. To verify this, we consider a modified CoinRun environment where the agent must learn to collect good objects (e.g., gold coins) and avoid bad objects (e.g., silver coins). Similar to the small-scale CoinRun experiment, agents are trained to collect the goal object in a fixed map layout (see Figure 17(a)) and tested in unseen environments where only the style of the background is changed. Figure 17(b) shows that our method works well for such color-conditioned RL tasks because a trained agent can exploit other factors, such as object location, to perform the task. Moreover, our method achieves a significant performance gain over the vanilla PPO agent in unseen environments, as shown in Figure 17(c). As another example, in color-matching tasks such as the keys-doors puzzle in DeepMind Lab, the agent must collect colored keys to open matching doors. Even though this task is color-conditioned, a policy trained with our method can perform well, because objects of the same color have the same color value even after randomization, i.e., our randomization maintains the structure of the input observation. This evidences the wide applicability of our idea.
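The diagonal randomization for proprioceptive states can be sketched in a few lines; the sampling range U(0.8, 1.2) is from the text above, while the interface is illustrative.

```python
import numpy as np

def random_diagonal_layer(d, low=0.8, high=1.2, rng=None):
    """Sample a d x d diagonal matrix whose entries rescale each state
    dimension independently. All factors are positive, so the signs (and
    hence the intrinsic structure) of the input are preserved."""
    rng = rng or np.random.default_rng()
    return np.diag(rng.uniform(low, high, size=d))

# usage: resample once per training iteration, apply to every state
d = 11                                  # e.g., Hopper observation dimension
M = random_diagonal_layer(d)
state = np.random.randn(d)
randomized_state = M @ state            # amplitude randomized, signs intact
```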
We also remark that our method can handle more extreme corner cases by adjusting the fraction of clean samples during training. In summary, we believe the proposed method covers a broad scope of generalization across low-level transformations of the observation space. We further investigate the effect of the fraction of clean samples during training: as shown in Figure 17(d), the best unseen performance on large-scale CoinRun is achieved when the fraction of clean samples is 0.1.
We propose a simple randomization technique for improving generalization in deep reinforcement learning across tasks with various unseen visual patterns.
904
scitldr
Touch interactions with current mobile devices have limited expressiveness. Augmenting devices with additional degrees of freedom can add power to the interaction, and several augmentations have been proposed and tested. However, little is known about the effects of learning multiple sets of augmented interactions that are mapped to different applications. To better understand whether multiple command mappings can interfere with one another, or affect transfer and retention, we developed a prototype with three pushbuttons on a smartphone case that can be used to provide augmented input to the system. The buttons can be chorded to provide seven possible shortcuts or transient mode switches. We mapped these buttons to three different sets of actions, and carried out a study to see whether multiple mappings affect learning and performance, transfer, and retention. Our results show that all of the mappings were quickly learned, and that there was no reduction in performance with multiple mappings. Transfer to a more realistic task was successful, although with a slight reduction in accuracy. Retention after one week was initially poor, but expert performance was quickly restored. Our work provides new information about the design and use of augmented input in mobile interactions.

Mobile touchscreen devices such as smartphones, tablets, and smartwatches are now ubiquitous. The simplicity of touch-based interaction is one of the main reasons for their popularity, but touch interfaces have low expressiveness: they are limited in the number of actions that the user can produce in a single input. As a result, touch interactions often involve additional actions to choose modes or to navigate menu hierarchies. These limitations can be addressed by adding new degrees of freedom to touch devices. For example, both Android and iOS devices have augmentations that allow the user to distinguish between scrolling and selecting: Android uses a timeout on the initial touch (i.e., a drag starts with either a short press or a long press), and some iOS devices use pressure-sensitive screens with different pressure levels to specify selection and scrolling. Researchers have also proposed adding a wide variety of new degrees of freedom to touch devices, including multi-touch and bimanual input, external buttons and force sensors, back-of-device touch, sensors for pen state or screen tilt, and pressure sensors. Studies have shown these additional degrees of freedom to be effective at increasing the expressive power of interaction with a mobile device. However, previous research has only looked at these new degrees of freedom in single contexts, and as a result we know little about how augmented input will work when it is used in multiple different applications: if an augmented input is mapped to a set of actions that are specific to one application, will there be interference when the same augmentations are mapped to a different set of actions in another application? To find out how multiple mappings for a new degree of freedom affect learning and usage, we carried out a study with a prototype device that provides three buttons on the side of a smartphone case. The buttons can be chorded, giving seven inputs that can be used for discrete commands or transient modes. We developed three different mappings for these chording buttons in three different contexts: shortcuts for a launcher app, colour selections for a drawing app, and modes for a text-editing app.
Our study looked at three issues: first, whether learning multiple mappings with the chorded buttons would interfere with learning or accuracy; second, whether people could transfer their learning from training to usage tasks that set the button commands into more complex and realistic activities; and third, whether memory of the multiple mappings would be retained over one week, without any intervening practice. Our evaluation results provide new insights into the use of augmented input for mobile devices:
- Learning multiple mappings did not reduce performance: people were able to learn all three mappings well, and actually learned the second and third mappings significantly faster than the first;
- Multiple mappings did not reduce accuracy: people were as accurate on a memory test with three mappings as they were when learning the individual mappings;
- Performance did transfer from training to more realistic usage tasks, although accuracy decreased slightly;
- Retention after one week was initially poor (accuracy was half that of the first session), but performance quickly returned to near-expert levels.

Our work provides two main contributions. First, we show that chorded input is a successful way to provide a rich input vocabulary that can be used with multiple applications. Second, we provide empirical evidence that mapping augmented input to multiple contexts does not impair performance. Our results provide new evidence that augmented input can realistically increase the expressive power of interactions with mobile devices.

Mobile devices have been augmented in many ways, and researchers have proposed new paradigms of interaction (e.g., for eyes-free ubiquitous computing or for post-WIMP devices) that can incorporate different types of augmentation. Cechanowicz and colleagues also created a framework specifically about augmented interactions; they suggest several ways of adding to an interaction, such as adding states to a discrete degree of freedom, adding an entirely new degree of freedom, or "upgrading" a discrete degree of freedom to use continuous input. Chorded input for text entry has existed for many years (e.g., stenographic machines for court reporters, or Engelbart and English's one-hand keyboard in the NLS system). Researchers have studied several issues in chorded text input, including performance, learning, and device design. A longitudinal study of training performance with the Twiddler one-handed keyboard showed that users can learn chorded devices and can gain a high level of expertise. The study had 10 participants train for 20 sessions of 20 minutes each; results showed that by session eight, chording was faster than the multi-tap technique, and that by session 20, the mean typing speed was 26 words per minute. Five participants who continued the study to 25 hours of training had a mean typing speed of 47 words per minute. Because this high level of performance requires substantial training time, researchers have also looked at ways of reducing training time for novices. For example, studies have investigated the effects of using different types of phrase sets in training, and the effects of feedback. Several chording designs have been demonstrated for text entry on keypad-style mobile phones. The ChordTap system added external buttons to the phone case; to type a letter, the dominant hand selects a number key on the phone (which represents up to four letters) and the non-dominant hand presses the chording keys to select a letter within the group.
A study showed that the system was quickly learned, and that it outperformed multi-tap from the second block of trials. A similar system used three of the keypad buttons to select the letter within the group, allowing chorded input without external buttons. The TiltText prototype used the four directions of the phone's tilt sensor to choose a letter within the group. Other types of chording have also been seen in multi-touch devices, where combinations of fingers are used to indicate different states to the system. Several researchers have looked at multi-touch input for menu selection. For example, finger-count menus use the number of fingers in two different areas of the touch surface to indicate a category (with the left hand) and an item within that menu (with the right hand). Two-handed marking menus also divide the screen into left and right sections, with a stroke on the left side selecting the submenu and a stroke on the right selecting the item. Multi-touch marking menus combine these two approaches, using vision-based finger identification to increase the number of possible combinations. Each multi-finger chord indicates which menu is to be displayed, and the subsequent direction in which the touch points are moved indicates the item to be selected. HandMarks is a bimanual technique that uses the left hand on the surface as a reference frame for selecting menu items with the right hand. The FastTap system uses chorded multiple touches to indicate both that the menu mode is active and the selection of an item from a grid menu. Other kinds of chording have also been investigated with touch devices. The BiTouch system was a general-purpose technique that allowed touches from the supporting hand to be used in conjunction with touches from the dominant hand. Olafsdottir and Appert developed a taxonomy of multi-touch gestures (including chords), and Ghomi and colleagues developed a training technique for learning multi-touch chords. Finally, multi-finger input on a phone case was also shown by Wilson and Brewster, who developed a prototype with pressure sensors under each finger holding the phone; input could involve single fingers or combinations of fingers (with pressure level as an added degree of freedom). Researchers have also developed touch devices and techniques that involve other types of additional input, including methods for combining pen input with touch, using vocal input, using the back of the device as well as the front, using tilt state with a directional swipe on the touch surface to create an input vocabulary, or using a phone's accelerometers to enhance touch and create both enhanced motion gestures (e.g., one-handed zooming by combining touch and tilt) and more expressive touch. Enhanced input can also address issues with interface modes, which are often considered to be a cause of errors. Modes can be persistent or "spring loaded" (also called quasimodes); the latter are active only when the user maintains a physical action (e.g., holding down a key), and this kinesthetic feedback can help people remember that they are in a different mode. When interfaces involve persistent modes, several means for switching have been proposed. For example, Li and colleagues compared several mode-switch mechanisms for changing from inking to gesturing with a stylus: a pen button, a separate button in the non-dominant hand, a timeout, pen pressure, and the eraser end of the pen.
They found that a button held in the other hand was fastest and most preferred, and that the timeout was slow and error prone. Other researchers have explored implicit modes that allow fluid specification of a mode without an explicit switch. For example, Chu and colleagues created pressure-sensitive "haptic conviction widgets" that allow either normal or forceful interaction to indicate different levels of confidence. Similarly, some iOS devices use touch pressure to differentiate between actions such as selection and scrolling. Many techniques add new sensing capabilities to create the additional modes: for example, pressure sensors have been used to enhance mouse input and pen-based widgets; three-state switches were added to a mouse to create pop-through buttons; and height sensing was used to enable different actions in different height layers (e.g., the hover state of a pen, or the space above a digital table). Other techniques simply use existing sensing that is currently unused in an interaction. For example, OrthoZoom exploits the unused horizontal dimension in a standard scrollbar to add zooming (by moving the pointer left or right while scrolling). Despite the wide range of work that has been carried out in this area, there is relatively little research on issues of interference, transfer, or retention for these augmented interfaces, particularly with multiple mappings. The study below provides initial baseline information for these issues, but first, we describe the design and construction of the prototype that we used as the basis for our evaluation.

In order to test learning, interference, and retention, we developed a prototype system that adds three hardware buttons to a custom-printed phone case and makes the state of those buttons available to applications. We designed and 3D-printed a bumper-style case for an Android Nexus 5 phone, with a compartment mounted on the back to hold the circuit boards from three Flic buttons (Bluetooth LE buttons made by Shortcut Labs). The Flic devices can be configured to perform various predetermined actions when pressed; Shortcut Labs also provides an Android API for using the buttons with custom software. We removed the PCBs containing the Bluetooth circuitry, and soldered new buttons to the PCBs (Figure 1). The new pushbuttons are momentary switches (i.e., they return to the "off" state when released) with 11mm-diameter push surfaces and 5mm travel. We tested several button styles and sizes in order to find devices that were comfortable to push, that provided tactile feedback about the state of the press, and that were small enough to fit under three fingers. This design allows us to use the Flic Bluetooth events but with buttons that can be mounted closer together; the new buttons do not require any changes to our use of the API. The prototype is held as a normal phone with the left hand, with the index, middle, and ring fingers placed on the pushbuttons (Figure 2). The pushbuttons are stiff enough that these three fingers can also grip the phone without engaging the buttons; the fifth finger of the left hand can be placed comfortably on the phone case, adding stability when performing chorded button combinations. Finally, we note that the button housing on our prototype was larger than would be required by a commercial device; we estimate that the hardware could easily be built into a housing that is only marginally larger than a typical phone case. The prototype worked well in the study sessions described below.
No participant complained of fatigue or difficulty pressing the buttons (although we observed a few difficulties matching the timeout period, as described below). The phone case was easy to hold, and the button positions were adequate for the hand sizes of our participants. Pressing the buttons in chords did not appear to cause difficulty for any participant (although with some timing issues, as described later). We wrote a simple wrapper library for Android that attaches callback functions to the buttons through the Flic API. Android applications can poll the current combined state of the buttons through this wrapper. Callback functions attached through the wrapper are put on a short timer, allowing time for multiple buttons to be depressed before the callback executes. In all the applications we created, we assigned a single callback function to all the buttons; this function checks the state of all buttons and determines the appropriate behavior based on the combined state. Identifying chords presents an interpretation problem for any input system. When only individual buttons can be pressed, software can execute actions as soon as the signal has been received from any button. When chorded input is allowed, however, this method is insufficient, because users do not press all of the buttons of a chord at exactly the same time. Therefore, we implemented a 200ms wait time before processing input after an initial button signal; after this delay, the callback read the state of all buttons and reported the combined pattern (i.e., a chord or a single press). Once an input is registered, all buttons must return to their "off" states before another input can be produced (a minimal code sketch of this logic appears below). With three buttons, the user can specify eight states, but in our applications we assume that there is a default state corresponding to no buttons pressed. This approach prevents the user from having to maintain pressure on the buttons during default operation.

We carried out a study of our chording system to investigate our three main research questions:
- Interference: does learning additional mappings with the same buttons reduce learning or accuracy?
- Transfer: is performance maintained when users move from training to usage tasks that set the button commands into more realistic activities?
- Retention: does memory of the command mappings persist over one week (without intervening practice)?

To test whether learning multiple mappings interferes with learning rate or accuracy, we created a training application to teach three different mappings to participants: seven application shortcuts (Apps), seven colors (Colors), and seven text-editing commands (Text) (Table 1). Participants learned the mappings one at a time, as this fits the way that users typically become expert with one application through frequent use, then become expert with another. To further test interference, after all mappings were learned we gave participants a memory test to determine whether they could remember individual commands from all of the mappings. This memory test corresponds to scenarios in which the user switches between applications and must remember different mappings at different times. To test whether the mappings learned in the training system would transfer, we asked participants to use two of the mappings in simulated usage tasks.
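As referenced above, a minimal sketch of the chord-aggregation (debounce) logic follows. It is written as plain Python pseudocode rather than the Android/Flic implementation, and the button I/O interface is hypothetical.

```python
import threading

CHORD_WINDOW = 0.2  # the 200 ms aggregation window described above

class ChordReader:
    """Sketch of the chord-aggregation logic: on the first button-down
    signal, wait 200 ms, then read the combined state of all three buttons
    and report it as a single input. All buttons must return to 'off'
    before the next input can be registered."""

    def __init__(self, read_button_states, on_chord):
        self.read_button_states = read_button_states  # () -> (bool, bool, bool)
        self.on_chord = on_chord                      # callback taking a 3-bit pattern
        self.armed = True                             # ready to accept a new input

    def on_button_down(self):
        if not self.armed:
            return
        self.armed = False
        threading.Timer(CHORD_WINDOW, self._report).start()

    def _report(self):
        states = self.read_button_states()
        pattern = sum(bit << i for i, bit in enumerate(states))  # 1..7
        self.on_chord(pattern)

    def on_all_buttons_released(self):
        self.armed = True   # re-arm only once every button is 'off'
```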
Colors were used in a drawing program in which participants were asked to draw shapes in a particular line color, and Text commands were used in a simple editor in which participants were asked to manipulate the formatting of lines of text. To test retention, we recruited a subset of participants to carry out the memory test and the usage tasks a second time, one week after the initial session. Participants were not told that they would have to remember the mappings, and did not practice during the intervening week.

The first part of the study had participants learn and practice the mappings over ten blocks of trials. The system displayed a target item on the screen, and asked the user to press the appropriate button combination for that item (see Figure 2). The system provided feedback about the user's selection (Figure 2, bottom of screen); when the user correctly selected the target item, the system played a short tone and moved on to the next item. Users could consult a dialog that displayed the entire current mapping, but had to close the dialog to complete the trial. The system presented each item in the seven-item mapping twice per block (sampling without replacement), and continued for ten blocks. The same system was used for all three mappings, and recorded selection time as well as any incorrect selections (participants continued their attempts until they selected the correct item).

We created two applications (Drawing and TextEdit) to test usage of two mappings in larger and more complex activities. Drawing. The Drawing application (Figure 3) is a simple paint program that uses the chord buttons to control line color (see Table 1). The application treated the button input as a set of spring-loaded modes; that is, the drawing color was set based on the current state of the buttons, and was unset when the buttons were released. For example, to draw a red square as shown in Figure 3, users held down the first button with their left hand and drew the square with their right hand; when the button was released, the system returned to its default mode (where touch is interpreted as panning). If the user released the buttons in the middle of a stroke, the line colour changed back to the default grey. For each task in the Drawing application, a message on the screen asked the participant to draw a shape in a particular color. Tasks were grouped into blocks of 14, with each color appearing twice per block. A task was judged to be complete when the participant drew at least one line with the correct color (we did not evaluate whether the shape was correct, but participants did not know this). Participants completed three blocks in total. TextEdit. The TextEdit application asked users to select lines of text and apply manipulations such as cutting and pasting the text, setting the style (bold or italic), and increasing or decreasing the font size. Each of these six manipulations was mapped to a button combination. The seventh action for this mapping was used for selection, implemented as a spring-loaded mode that was combined with a touch action. We mapped selection to the combination of all three buttons, since selection had to be carried out frequently and this combination was easy to remember and execute. For each TextEdit task, the lines on the screen told the user what manipulations to make to the text (see Figure 4). Each task asked the participant to select some text and then perform a manipulation.
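For illustration, the spring-loaded color mode of the Drawing application can be sketched as a pure function of the current chord pattern; the specific pattern-to-color assignments below are hypothetical.

```python
# Minimal sketch of a spring-loaded mode: the active color is a pure
# function of the current chord pattern and reverts to the default as
# soon as the buttons are released. Pattern->color assignments are
# hypothetical, not the study's actual mapping.
DEFAULT_COLOR = "grey"
CHORD_COLORS = {
    0b001: "red",
    0b010: "green",
    0b100: "blue",
    0b011: "yellow",
    # ... remaining chords would map to the other study colors
}

def current_draw_color(button_pattern):
    """Return the line color for the current button state (0 = no buttons)."""
    if button_pattern == 0:
        return DEFAULT_COLOR   # buttons released: back to panning/default
    return CHORD_COLORS.get(button_pattern, DEFAULT_COLOR)
```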
There were six manipulations in total, and we combined copy and paste into a single task, so there were five tasks. Tasks were repeated twice per block, and there were four blocks. Tasks were judged to be correct when the correct styling was applied; if the wrong formatting was applied, the user had to press an undo button to reset the text to its original form and perform the task again.

The third stage of the study was a memory test with an interface similar to the learning system described above. The system gave prompts for each of the 21 commands in random order (Apps, Colors, and Text were mixed together, and sampled without replacement). Participants pressed the button combination for each prompt, but no feedback was given about what was selected or whether the selection was correct. Participants were only allowed to answer once per prompt, and after each response the system moved to the next item. To determine participants' retention of the mappings, after the study was over we recruited 8 of the 15 participants to return to the lab after one week to carry out the memory test and the usage tasks again (two blocks of each of the drawing and text tasks). Participants were not told during the first study that they would be asked to remember the mappings beyond the study; participants for the one-week follow-up were recruited after the initial data collection was complete. The usage and memory tests operated as described above.

After completing an informed consent form and a demographics questionnaire, participants were shown the system and introduced to the use of the external buttons. Participants were randomly assigned to a mapping-order condition (counterbalanced using a Latin square), and then started the training tasks for their first mapping. Participants were told that both time and accuracy would be recorded, but were encouraged to use their memory of the chords even if they were not completely sure. After the Color and Text mappings, participants also completed the usage tasks as described above (there was no usage task for the Apps mapping). After completing the learning and usage tasks with each mapping, participants filled out an effort questionnaire based on the NASA-TLX survey. After all mappings, participants completed the memory test. For the retention test, participants filled out a second consent form, then completed the memory test with no assistance or reminder of the mappings. They then carried out two blocks of each of the usage tasks (the Drawing and TextEdit apps, in the same order as in the first study).

Fifteen participants were recruited from the local university community (8 women, 7 men, mean age 28.6). All participants were experienced with mobile devices (more than 30 min/day average use). All but one of the participants were right-handed, and the one left-handed participant stated that they were used to operating mobile devices in a right-handed fashion. The study used the chording prototype described above. Sessions were carried out with participants seated at a desk, holding the phone (and operating the chording buttons) with their left hands. The system recorded all performance data; questionnaire responses were entered on a separate PC. The main study used two 3x10 repeated-measures designs. The first looked at differences across mappings, and used factors Mapping (Apps, Colors, Text) and Block.
The second looked at interference by analyzing differences by the position of the mapping in the overall sequence, and used factors Position (first, second, third) and Block. For the memory tests, we used a 21x3x7 design with several planned comparisons; factors were Item (the 21 items shown in Table 1), Pattern (the 7 button patterns shown in column 2 of Table 1), and Mapping (Apps, Colors, Text). Dependent variables were selection time, accuracy (the proportion of trials where the correct item was chosen on the first try), and errors. No outliers were removed from the data. In the following analyses, significant ANOVA results report partial eta-squared as a measure of effect size (where .01 can be considered small, .06 medium, and > .14 large). We organize the results below around the main issues under investigation: training performance when learning three different mappings, interference effects, transfer performance, and retention after one week.

There was no significant main effect of Mapping on performance (p = .09), with no interaction (F18,252 = 0.71, p = 0.78). Overall error rates (i.e., the total number of selections per trial, since participants continued to make selections until they got the correct answer) for all mappings were low: 0.25 errors per selection for Apps, 0.26 for Colors, and 0.24 for Text. During the sessions we identified a hardware-based source of error that reduced accuracy: the 200ms timeout period in some cases caused errors when people held the buttons for the wrong period of time, when the Bluetooth buttons did not transmit a signal fast enough, or when people formed a chord in stages. This issue contributes to the accuracy rates shown above; our observations indicate that people had the button combinations correctly memorized, but had occasional problems producing the combination with the prototype. We believe that this difficulty can be fixed by adjusting our timeout values and by using an embedded microprocessor to read the button states (thus avoiding Bluetooth delay).

Perceived Effort. Responses to the NASA-TLX effort questionnaire are shown in Figure 7; overall, people felt that all of the mappings required relatively low effort. Friedman rank-sum tests showed only one difference between mappings: people saw themselves as being less successful with the Apps mapping (χ² = 7, p = 0.030). In the end-of-session questionnaire, 12 participants stated that Colors were easiest to remember (e.g., one person stated "colours were easier to remember" and another said that "memorizing the colours felt the easiest").

To determine whether learning a second and third mapping would be hindered by the already-memorized mappings, we analysed the performance data based on whether the mapping was the first, second, or third to be learned. Figure 9 shows selection time over ten blocks for the first, second, and third mappings (the specific mapping in each position was counterbalanced). Selection time. A 3x10 RM-ANOVA looked for effects of position in the sequence on selection time. We did find a significant main effect of Position (F2,28 = 19.68, p < 0.0001, η² = 0.22), but as shown in Figure 8, the second and third mappings were actually faster than the first mapping. Follow-up t-tests with Bonferroni correction showed that both subsequent mappings were significantly faster than the first (p < 0.01). The difference was more obvious in the early blocks (indicated by a significant interaction between Position and Block, F18,252 = 4.63, p < 0.0001, η² = 0.14).
These findings suggest that adding new mappings for the same buttons does not impair learning or performance for subsequent mappings. Accuracy. We carried out a similar 3x10 RM-ANOVA to look for effects on accuracy (Figure 9). As with selection time, performance was worse with the first mapping (accuracy 0.8) than with the second and third mappings (0.85 and 0.86). The ANOVA showed a main effect of Position on accuracy (F2,28 = 7.18, p = 0.003, η² = 0.072), but no Position x Block interaction (F18,252 = 1.20, p = 0.051).

The third stage of the study was the memory test, in which participants selected each of the 21 commands from all three mappings in random order. Participants answered once per item with no feedback. The overall accuracy was 0.87 (0.86 for Apps, 0.86 for Colors, and 0.89 for Text); see Figure 11. Note that this accuracy is higher than that seen with the individual mappings during the training sessions. To determine whether there were differences in accuracy for individual items, mappings, or button patterns, we carried out a 21x3x7 (Item x Mapping x Pattern) RM-ANOVA. We found no significant effects of any of these factors (for Item, F20,240 = 1.55, p = 0.067; for Mapping, F2,24 = 0.43, p = 0.65; for Pattern, F6,12 = 0.0004, p = 0.99), and no interactions.

Results for the usage tasks are shown in Figure 10 (note that the text task had four blocks, and the drawing task had three blocks). Accuracy in the usage tasks ranged from 0.7 to 0.8 across the trial blocks, slightly lower than the 0.8-0.9 accuracy seen in the training stage of the study. It is possible that the additional mental requirements of the task (e.g., determining what to do, working with text, drawing lines) disrupted people's memory of the mappings, but the overall difference was small.

The one-week follow-up asked eight participants to carry out the memory test and two blocks of each of the usage tasks, to determine whether participants' memory of the mappings had persisted without any intervening practice (or even any knowledge that they would be re-tested). Overall, the follow-up showed that accuracy decayed substantially over one week, but that participants quickly returned to their previous level of expertise once they started the usage tasks. In the memory test, overall accuracy dropped to 0.49 (0.43 for Apps, 0.50 for Colors, and 0.54 for Text), with some individual items as low as 10% accuracy. Only two items maintained accuracy above 0.85: "Red" and "Copy". The two usage tasks (Drawing and Text editing) were carried out after the memory test, and in these tasks participant accuracy recovered considerably. In the first task (immediately after the memory test), participants had an overall 0.60 selection accuracy; by the second block, performance rose to accuracy levels similar to the first study (0.82 for Drawing; 0.70 for Text). This follow-up study is limited: it did not compare retention when learning only one mapping, so it is impossible to determine whether the decay arose because of the number of overloaded mappings learned in the first study. However, the study shows that retention is an important issue for designers of chorded memory-based techniques. A short training period (less than one hour for all three mappings) appears to be insufficient to ensure retention after one week with no intervening practice; in an ecological context, however, users would likely use the chords more regularly. In addition, participants' memory of the mappings was restored after only a few minutes of use.
Our study provides several main findings:
- The training phase showed that people were able to learn all three mappings quickly (performance followed a power law), and were able to achieve 90% accuracy within a few training blocks;
- Overloading the buttons with three different mappings did not appear to cause any problems for participants: the second and third mappings were learned faster than the first, and there was no difference in performance across the position of the learned mappings;
- People were able to successfully transfer their expertise from the training system to the usage tasks, although performance dropped by a small amount;
- Performance in the memory test, which mixed all three mappings together, was very strong, with many of the items remembered at near 100% accuracy;
- Retention over one week without any intervening practice was initially poor (about half the accuracy of the first memory test), but recovered quickly in the usage tasks to near the levels seen in the first sessions.

In the following paragraphs we discuss the reasons for our results, and comment on how our findings can be generalized and used in the design of richer touch-based interactions. People's overall success in learning to map twenty-one total items to different button combinations is not particularly surprising; evidence from other domains such as chording keyboards suggests that with practice, humans can be very successful in this type of task. It is more interesting, however, that these 21 items were grouped into three overloaded sets that used the same button combinations, and we did not see any evidence of interference between the mappings. One reason for people's success in learning with multiple button mappings may be that the contexts of the three mappings were quite different, and there were few conceptual overlaps in the semantics of the different groups of items (e.g., colors and application shortcuts are quite different in the ways that they are used). Even if this is true, however, there are likely many opportunities in mobile device use where this type of clean separation of semantics occurs, suggesting that overloading can be used to substantially increase the expressive power of limited input. People were also reasonably successful in using the learned commands in two usage tasks. This success shows that moving to more realistic tasks does not substantially disrupt memories built up during a training exercise, although it is likely that the added complexity of the tasks caused the small observed reduction in accuracy compared to training. The overall difference between the training and usage environments was relatively small, however; more work is needed to examine transfer effects in real-world use. Finally, the decay in memory of the mappings over one week may simply be an effect of the human memory system, given that our training period was short; for example, Ebbinghaus's early studies on "forgetting curves" show approximately similar decay to what we observed. It is likely that in real-world settings, the frequency of users' mobile phone use would provide intervening practice that would maintain users' memory, but this must be studied in greater detail in future work.

Limitations and opportunities for future work. The main limitations of our work are in the breadth and realism of our evaluations, and in the physical design of the prototype. First, although our work takes important steps towards ecological validity for augmented input, our study was still a controlled experiment.
We designed the study to focus on real-world issues of interference, transfer, and retention, but the realism of our tasks was relatively low. Therefore, a critical area for further work is testing our system with real tasks in real-world settings. The Flic software allows us to map button inputs to actions in real Android applications, so we plan to have people use the next version of the system over a longer time period and with their own applications. Second, it is clear that additional engineering work can be done to improve both the ergonomics and the performance of the prototype. The potential errors introduced by our 200ms timeout are a problem that can likely be solved, but the timeout caused other problems as well: once participants were expert with the commands, some of them felt that holding the combination until the application registered the command slowed them down. Adjusting the timeout and ensuring that the system does not introduce additional errors is an important area for our future work. We also plan to experiment with different invocation mechanisms (e.g., selection on button release) and with the effects of providing feedback as the chord is being produced. An additional opportunity for future work, identified by participants during the study, is the potential use of external chorded buttons as an eyes-free input mechanism. The button interface allows people to change input modes without shifting their visual attention from the current site of work, and also allows changing tools without needing to move the finger doing the drawing (and without occluding the workspace with menus or toolbars).

Expressiveness is limited in mobile touch interfaces. Many researchers have devised new ways of augmenting these interactions, but there is still little understanding of issues of interference, transfer, and retention for augmented touch interactions, particularly those that use multiple mappings for different usage contexts. To provide information about these issues, we developed an augmented phone case with three pushbuttons that can be chorded to provide seven input states. The external buttons can provide quick access to command shortcuts and transient modes, increasing the expressive power of interaction. We carried out a four-part study with the system, and found that people can successfully learn multiple mappings of chorded commands, and can maintain their expertise in more complex usage tasks. Retention was found to be an important issue: accuracy dropped over one week, but was quickly restored after a short period of use. Our work provides new knowledge about augmented interactions for touch devices, and shows that adding simple input mechanisms like chorded buttons is a promising way to augment mobile interactions.
Describes a study investigating interference, transfer, and retention of multiple mappings with the same set of chorded buttons
905
scitldr
Neural networks are widely used in Natural Language Processing, yet despite their empirical successes, their behaviour is brittle: they are both over-sensitive to small input changes, and under-sensitive to deletions of large fractions of input text. This paper aims to tackle under-sensitivity in the context of natural language inference by ensuring that models do not become more confident in their predictions as arbitrary subsets of words from the input text are deleted. We develop a novel technique for formal verification of this specification for models based on the popular decomposable attention mechanism by employing the efficient yet effective interval bound propagation (IBP) approach. Using this method we can efficiently prove, given a model, whether a particular sample is free from the under-sensitivity problem. We compare different training methods to address under-sensitivity, and compare metrics to measure it. In our experiments on the SNLI and MNLI datasets, we observe that IBP training leads to a significantly improved verified accuracy. On the SNLI test set, we can verify 18.4% of samples, a substantial improvement over only 2.8% using standard training.

Natural language processing (NLP) widely relies on neural networks, a model class known to be vulnerable to adversarial input perturbations. Adversarial samples typically expose over-sensitivity to semantically invariant text transformations, e.g. character flips or paraphrases. Prior work exposed another type of problematic behavior: deleting large parts of input text can cause a model's confidence to increase; Figure 1 shows an example. That is, reduced sets of input words can suffice to trigger more confident predictions. Such under-sensitivity is problematic: neural models can 'solve' NLP tasks without task-relevant textual comprehension skills, instead fitting spurious cues in the data that suffice to form correct predictions. Models might then achieve strong nominal test accuracy on data from the same (biased) distribution as the training set, by exploiting predictive shortcuts that are not representative of the given NLP task, and consequently fail drastically when evaluated on samples without these spurious cues. A major issue with identifying reduced inputs is the combinatorially large space of arbitrary text deletions; this can only be searched exhaustively for short sequences. Prior work has considered heuristics like beam search or bandits, but these are generally not guaranteed to find the worst-case reductions. In this work, we address the under-sensitivity issue by designing and formally verifying the under-sensitivity specification that a model should not become more confident as arbitrary subsets of input words are deleted. Under-sensitivity behaviour is not reflected in nominal accuracy, but one can instead use this specification to measure and evaluate the extent to which samples exhibit under-sensitivity. Instead of better, yet still imperfect, search heuristics, we describe how interval bound propagation (IBP), a formal model verification method, can be used to efficiently cover the full reduction space and verify the under-sensitivity specification. IBP can be applied at test time to arbitrary model inputs to verify whether or not they are under-sensitive; it can also be used to derive a new auxiliary training objective that leads to models verifiably adhering to this specification, and which we find generalizes to held-out test data.
While under-sensitivity has been demonstrated for several NLP tasks, we chose to study the use case of natural language inference (NLI) in particular as a representative task: sequences are comparatively short, datasets large, and the label complexity is small. We investigate the verification of the popular decomposable attention model (DAM) in detail. This architecture covers many of the neural layer types of contemporary models, and we focus on a detailed description of how IBP can be leveraged to efficiently verify its behaviour. We then experimentally compare various training methods addressing under-sensitivity: i) standard training, ii) data augmentation, iii) adversarial training, iv) IBP-verified training, and v) entropy regularization, and evaluate their effectiveness against nominal (test) accuracy, adversarial accuracy, IBP-verified accuracy, and a verification oracle. To summarise, the main contributions of this paper are: (i) formalization of the problem of verifying an under-sensitivity specification; (ii) verification of the Decomposable Attention Model using Interval Bound Propagation; and (iii) empirical analysis of the efficacy of different evaluation methods for verifying robustness, and of different training methods for developing verifiably robust models.

Natural Language Inference. Natural Language Inference is the task of predicting whether a natural language premise entails a natural language hypothesis. The availability of large-scale datasets has spurred a profusion of neural architecture development for this task (e.g., Rocktäschel et al., 2016), among many others.

Adversarial Vulnerability in NLP. There is a growing body of research into NLP adversarial examples, each using a slightly different choice of semantically invariant text transformations, or a task-specific attack. A first class of attacks considers word- and character-level perturbations, while another type of attack exploits back-translation systems to either mine rules or train syntactically controlled paraphrasing models. Other work uses syntactic and lexical transformations, investigates synthetic and natural noise in Machine Translation, or introduces task-specific adversarial attacks for Reading Comprehension/QA and for Fact Checking. In NLI in particular, prior work has penalized adversarially chosen logical inconsistencies in NLI predictions, used knowledge-guided adversaries, and shown that models can become more confident as large fractions of input text are deleted; under-sensitivity has also been addressed in a dialogue setting. A link has furthermore been demonstrated between excessive prediction invariance and model vulnerability in computer vision.

Formal Verification. Formal verification provides a provable guarantee that models are consistent with a formally defined specification (a mathematical relationship between the inputs and outputs of the model). Examples of specifications include robustness to bounded adversarial perturbations, monotonicity of the output with respect to a subset of the inputs, and consistency with physical laws. The literature can be categorised into complete methods that use Mixed-Integer Programming (MIP) or Satisfiability Modulo Theory (SMT), and incomplete methods that solve a convex relaxation of the verification problem. Complete methods perform exhaustive enumeration to find a counter-example to the specification, or to rule out the existence of counter-examples (hence proving that the specification holds). Complete methods are therefore expensive and difficult to scale.
Incomplete methods are conservative (i.e., they cannot always prove that a specification is true even when it is), but are more scalable and can be used inside the training loop to train models that are consistent and verifiable. While some recent work addresses the issue of verification in NLP, most of it has focused on ℓ∞ norm-bounded perturbations for image classification. This paper complements work on incomplete verification methods by extending IBP to NLI, where inputs are inherently discrete (in contrast to images, which are continuous). In the NLP context in particular, very recent work has verified CNN and LSTM models with specifications against over-sensitivity adversaries under synonym replacement, and other work studies verification of output-length specifications in machine translation models, showing that the outputs of machine translation and image captioning systems can be provably bounded when the inputs are perturbed within a given set. In contrast, this work examines under-sensitivity behaviour: excessive model prediction invariance under arbitrary word-combination deletions. We highlight that the verification of neural networks is an extremely challenging task, and that scaling complete and incomplete methods to large models is an open problem.

Neural networks are expressive models that can fit large datasets and achieve strong nominal test accuracy. At the same time, however, they can fit data in a way that violates our idea of how they should fit it, from an input-attribution perspective. Figure 2 visualises the extent of the problem in NLI: for example, for 20% of the SNLI test set it is possible to delete 78% or more of the premise words while the prediction confidence increases or remains the same. We next formally describe a specification that checks a model's under-sensitivity, i.e. whether any such reduction exists. The specification addresses model output probabilities when parts of the input text are deleted. To this end, we first introduce the notion of a perturbation space X_in(x_nom) of an original nominal input x_nom. This space contains all possible reductions, i.e. inputs where arbitrarily chosen tokens of the original nominal input x_nom are deleted. Note that this space grows exponentially in the length of the input. We would like to verify whether or not there exists any reduced input x− with higher probability for the (nominal) prediction than x_nom has. More formally, this can be stated as a specification:

∀x− ∈ X_in(x_nom): P(ŷ | x−) ≤ P(ŷ | x_nom),

where ŷ is the (nominal) model prediction. Determining precisely how prediction probabilities should change when input words are deleted is contentious and prone to inconsistencies: removing stop words, for example, may lead to little relevant change, while words carrying crucial information (e.g., 'not') can significantly alter the meaning of the sentence. It is important to be cautious and not too restrictive in the specification design, and to be certain that whatever is specified is desirable. A specification that prediction probabilities should at least not increase under arbitrary input deletion is a conservative choice. Other specifications are worth consideration, such as monotonically decreasing certainty as more input is deleted. We will however see that even our very conservative choice of an under-sensitivity specification is hard to positively verify for most inputs in the DAM model. There are different approaches to establish whether the specification is satisfied.
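For intuition, the specification can in principle be checked by brute force over the reduction space. The sketch below assumes a hypothetical `predict_proba` interface and is only tractable for very short inputs, which is exactly why IBP is needed.

```python
from itertools import combinations

def violates_specification(predict_proba, tokens, y_hat):
    """Exhaustively search X_in(x_nom) for a reduction with higher
    probability for the nominal prediction y_hat. `predict_proba(tokens)`
    is assumed to return a dict mapping labels to probabilities.
    Exponential in len(tokens): only feasible for short sequences."""
    p_nom = predict_proba(tokens)[y_hat]
    n = len(tokens)
    for k in range(1, n):                       # number of tokens to delete
        for deleted in combinations(range(n), k):
            dset = set(deleted)
            reduced = [t for i, t in enumerate(tokens) if i not in dset]
            if predict_proba(reduced)[y_hat] > p_nom:
                return True, reduced            # found a violating reduction
    return False, None
```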
With unlimited computational capacity, the property could be evaluated exhaustively for all x− ∈ X_in(x_nom). Statistically sampling from the reduction space can give an indication of under-sensitivity, but has a very limited coverage rate. Search heuristics can try to identify violations (and be used for 'adversarial' training), but there is no guarantee that a stronger search procedure cannot find more or worse violations. IBP verification, on the other hand, offers a formal guarantee across the whole space by establishing outer bounds for X_in(x_nom) and resulting bounds on output probabilities. We next give a brief introduction to the Decomposable Attention Model (DAM), which we will later verify. The DAM architecture comprises commonly used neural NLP components, such as word embeddings, attention, and feed-forward networks. Subsequently we introduce Interval Bound Propagation, and then bring these together to verify the behaviour of the DAM, i.e. efficiently assert whether an input satisfies the specification.

Decomposable Attention. The NLI task takes two word sequences as input, a premise and a hypothesis, and outputs a discrete entailment label prediction in {entailment, neutral, contradiction}. The DAM architecture assumes the input word sequences to be embedded (e.g. as d-dimensional word vectors), i.e. it operates on two sequences of input vectors A = [a_1; ...; a_I] ∈ R^(d×I) and B = [b_1; ...; b_J] ∈ R^(d×J), where [·;·] denotes concatenation, and I and J are sequence lengths. Word vectors are individually transformed with a vector-valued function F, and pairs thereof are then compared:

e_ij = F(a_i)ᵀ F(b_j).

Note that we follow the original DAM notation, and e_ij is not related to a basis vector. In the general model formulation, F can be a linear transformation or an MLP; this does not affect the derivations made here. Adopting matrix notation across position pairs (i, j), the comparison above can instead be rewritten in matrix form as E = F(A)ᵀ F(B) ∈ R^(I×J), which is used to compute two attention masks, one over each sequence, by normalising across i or across j:

P^(A)_ij = exp(e_ij) / Σ_{k=1..I} exp(e_kj),    P^(B)_ij = exp(e_ij) / Σ_{k=1..J} exp(e_ik).

These two attention masks serve as coefficients in a convex combination over the original word vectors, aggregating each of the two sequences:

Ã = A · P^(A) ∈ R^(d×J),    B̃ = B · (P^(B))ᵀ ∈ R^(d×I).

That is, Ã and B̃ hold attention-aggregated word vectors from A and B at positions j = 1, ..., J and i = 1, ..., I, respectively. These are joined with the original word representations, mixed using a position-wise feed-forward network G: R^(2d) → R^d, and finally summed into a single vector representation for each sequence:

v_1 = Σ_{i=1..I} G([a_i; b̃_i]),    v_2 = Σ_{j=1..J} G([ã_j; b_j]).

As a last step, a logit vector with entries for each class is computed as H([v_1; v_2]), where H: R^(2d) → R^C is again a feed-forward network, and C is the number of output classes.

Interval Bound Propagation. IBP is an incomplete but efficient verification method that can be used to verify input-output relationships. It tracks how a part of the input space (in our case: the perturbation space X_in(x_nom)) propagates forward through the network. IBP starts with an axis-aligned bounding box surrounding X_in(x_nom), and uses interval arithmetic to obtain an axis-aligned bounding box for the output set. Formally, let us assume that the neural network is defined by a sequence of transformations h_k for each of its K layers; that is, for z_0 ∈ X_in(x_nom):

z_k = h_k(z_{k−1}),    k = 1, ..., K.

The output z_K ∈ R^C has C logits corresponding to the C classes. IBP bounds the activation z_k of each layer by an axis-aligned bounding box (i.e., z̲_k ≤ z_k ≤ z̄_k) using interval arithmetic. For each coordinate z_{k,i} of z_k we have:

z̲_{k,i} = min e_iᵀ h_k(z) s.t. z̲_{k−1} ≤ z ≤ z̄_{k−1},    z̄_{k,i} = max e_iᵀ h_k(z) s.t. z̲_{k−1} ≤ z ≤ z̄_{k−1},

where e_i is the standard i-th basis vector.
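For the feed-forward parts of the network (the building blocks of G and H), interval bound propagation reduces to simple interval arithmetic. The following NumPy sketch illustrates the bound computation for an affine layer and a monotonic activation; it is a didactic sketch, not the paper's implementation.

```python
import numpy as np

def ibp_affine(lower, upper, W, b):
    """Propagate an axis-aligned box through an affine layer z -> W z + b.
    Standard interval arithmetic: positive weights move with the upper
    bound, negative weights with the lower bound."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lower = W_pos @ lower + W_neg @ upper + b
    new_upper = W_pos @ upper + W_neg @ lower + b
    return new_lower, new_upper

def ibp_relu(lower, upper):
    """Monotonic activations propagate bounds elementwise."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# usage: bounds through one hidden layer of a feed-forward network
rng = np.random.default_rng(0)
W, b = rng.standard_normal((5, 3)), rng.standard_normal(5)
l0, u0 = np.full(3, -1.0), np.full(3, 1.0)
l1, u1 = ibp_relu(*ibp_affine(l0, u0, W, b))
assert np.all(l1 <= u1)
```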
Finally, at the last layer, an upper bound on the worst-case violation of the specification can be evaluated quickly from the logit lower and upper bounds z̲_K and z̄_K, as these translate directly into bounds on the softmax probabilities. IBP can be performed in parallel while running the nominal forward pass. In general, however, the output bounds are loose, and this is exacerbated with increasing network depth. Consequently, IBP over-approximates the true extent of the image of X_in(x_nom) in output space, which can result in false negatives; in practice it is thus important to keep the bounds as tight as possible. IBP can be used at test time for verification, but also for training, by minimising loss terms derived from the logit bounds. IBP has previously been used on MLPs and convolutional networks with monotonic activations; one technical contribution of this work is to apply it to a model with an attention component (Section 5).

To address under-sensitivity, we aim to verify the specification for the DAM model. If the upper probability bound over the entire perturbation space X_in(x_nom) is smaller than the probability P(ŷ|x_nom) of the predicted class, then the specification is verified. That is,

∀z_0 ∈ X_in(x_nom): P(ŷ | z_0) ≤ P̄(ŷ) < P(ŷ | x_nom).

Using this inequality, we can assert whether the specification is verifiably satisfied for any given x_nom, i.e. whether or not there exists any reduced sample with higher probability than x_nom.

Overview. We first describe the model behaviour when removing a single word at a fixed position, then extend this to deleting single words at any position, and finally generalise to arbitrary multi-token deletions. One key difference from IBP bounds for other architectural components, such as CNNs or feed-forward layers, is the need for bounds on the attention normalisation, which has to take per-token upper and lower bounds into account. We exploit the fact that each vector of B̃ is a convex combination of the J vectors that constitute B (and similarly for Ã); hence, component-wise bounds on B̃ can be obtained efficiently by maximising over those J vectors. Ã and B̃ are then inputs to a regular feed-forward network (G followed by H), for which IBP can be used.

Deleting a Single Word: Particular Position. We first describe how model variables behave when an individual token at a fixed position r is removed from one of the sequences. Without loss of generality, we delete words from the second sequence, noting that the model is symmetric and the same can be derived for the other input sequence. We denote all resulting quantities with a bar (as in B̄). That is, when removing a single token at position r, B̄ = [b_1; ...; b_{r−1}; b_{r+1}; ...; b_J] ∈ R^(d×(J−1)), whereas Ā = A. Since F is applied per position in the sequence, the effect of the word deletion remains isolated at this point; the matrix product Ē = F(Ā)ᵀ F(B̄) ∈ R^(I×(J−1)) has identical entries as before, but with the r-th column removed. Likewise, P̄^(A) has identical entries compared to P^(A), but with the r-th column removed; that is, for i = 1, ..., I and j = 1, ..., J with j ≠ r:

P̄^(A)_ij = P^(A)_ij.

The attention mask P̄^(B), on the other hand, has renormalised entries. The values retain their relative order, yet the entries are larger because the r-th normalisation summand is removed; for j ≠ r:

P̄^(B)_ij = exp(e_ij) / Σ_{k≠r} exp(e_ik).

Hence we can compute P̄^(B)_ij in closed form as

P̄^(B)_ij = P^(B)_ij / (1 − P^(B)_ir).

To summarise the above: attention weights P^(B)_ij remain largely unchanged when deleting token r, but are rescaled to take the missing normalisation mass into account. In the next step, the model computes the convex combinations, which we denote Ã_r and B̃_r for deletion at position r.
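The closed-form renormalisation of the attention mask under a single-token deletion is easy to verify numerically; a small NumPy sketch (with hypothetical shapes) follows.

```python
import numpy as np

def renormalized_attention(P_B, r):
    """Closed-form attention mask after deleting token r of sequence B.

    P_B: [I, J] row-stochastic attention mask P^(B).
    Returns the [I, J-1] mask with the r-th column dropped and each row
    rescaled by 1 / (1 - P^(B)_ir), as derived above."""
    keep = [j for j in range(P_B.shape[1]) if j != r]
    return P_B[:, keep] / (1.0 - P_B[:, r:r + 1])

# sanity check on a random row-stochastic mask
rng = np.random.default_rng(1)
E = rng.standard_normal((4, 6))
P_B = np.exp(E) / np.exp(E).sum(axis=1, keepdims=True)
P_bar = renormalized_attention(P_B, r=2)
assert np.allclose(P_bar.sum(axis=1), 1.0)   # rows remain normalised
```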
Concretely, the reduced aggregation Ã̄ = A·P̄^(A) ∈ R^{d×(J−1)} has unchanged elements compared to before (as A remains unchanged), but the r-th column is removed. For B̃̄ = B̄·(P̄^(B))^T ∈ R^{d×I} the dimensionality remains unchanged, but B̄ has fewer elements and P̄^(B) is renormalised accordingly. Note that all this can still be computed in closed form using the renormalisation equation above, i.e. without need for IBP thus far, and these quantities can further be fed through the remaining network layers G and H to obtain probabilities. Deleting Single Word: Arbitrary Position. We have reached the point where Ā, B̄, Ã̄ and B̃̄ are derived in closed form, for a fixed position r. These can be computed exactly without approximation, for deleted words at any position r in the sequence. Extending this to arbitrary single-word deletions, we take the elementwise minimum/maximum across all possible single-word deletions, e.g. min_{r=1,...,J} and max_{r=1,...,J} of each entry of B̃̄, which establishes upper and lower bounds for each element, and analogously for the other matrices. In the DAM architecture, these matrices are next fed into the dense feed-forward layers G and H, each with two layers. We use IBP to propagate bounds through these layers, feeding in bounds on Ā, B̄, Ã̄ and B̃̄ as described above. As a result, after propagating these bounds through G and H, we obtain bounds on output logits (and consequently on probabilities) for deletions of a single token at any position. One further simplification is possible: we compute v̄_2 directly from v_2 by subtracting the r-th summand for fixed r (see the definition of v_2 above). Generalising this to arbitrary positions r, we can bound the subtracted vector with max_{r=1,...,J} {G([ã_r; b_r])} and min_{r=1,...,J} {G([ã_r; b_r])}, and thus directly obtain bounds for v̄_2. Deleting Several Words. We have described the behaviour of intermediate representations (and bounds for them) under deletions of arbitrary individual words; the case of removing several words is similar. The values of remaining individual word vectors a_i and b_j naturally remain unchanged. The previously established bounds for single-word deletions can be partly re-used to establish bounds for arbitrary multi-word deletions; see Appendix A for more detail. The resulting bounds for v̄_1 and v̄_2 are then input to a regular feed-forward network, for which IBP can be used. We now evaluate to which extent the DAM model verifiably satisfies the specification against under-sensitivity, and we furthermore compare different training approaches. Experiments are conducted on two large-scale NLI datasets: SNLI and multiNLI, henceforth MNLI. Where prior work addressed deletions of hypothesis words in SNLI, we establish the phenomenon also for MNLI, and for premise reductions. In our experiments we use premise reductions, noting that under-sensitivity is also present for hypotheses (see Fig. 2). For SNLI we use standard dataset splits, tuning hyperparameters on the development set and reporting results for the test set. For MNLI we split off 2000 samples from the development set for validation purposes and use the remaining samples as test set. We use the same types of feed-forward components, layer size, dropout, and word embedding hyperparameters as described in the original DAM work. We evaluate with respect to the following metrics: 1. Accuracy: Standard test accuracy. 2. Verified Accuracy: This metric measures whether both i) the prediction is correct and ii) it can be verified that no reduction with higher probability exists, using IBP verification. 3. Beam Search Heuristic: This metric uses beam search to find specification violations in the perturbation space, following the protocol of prior work on input reduction.
Search begins from the full sequence, gradually deleting words while keeping a beam of width 10. This metric then measures whether both i) the search heuristic found no counterexample, and ii) the prediction is correct. Note that this heuristic does not cover the full perturbation space, i.e. it does not suffice to rule out counterexamples to the specification. This metric provides an upper bound for verified accuracy. Training Methods. We compare the following training methods: 1. Standard Training: This provides a baseline for under-sensitivity behaviour under standard log-likelihood training. 2. Data Augmentation: A first and comparatively simple way to address under-sensitivity is by adding training samples with random word subsets deleted, and penalizing the model with a loss proportional to the specification violation. 3. Adversarial Training: Here we use a more systematic approach than random word deletions: we search within the perturbation space for inputs with large differences between nominal prediction probability and reduced probability, i.e. the strongest specification violations. We compare both i) random adversarial search, which samples 512 randomly reduced perturbations and picks the strongest violation, and ii) beam search with width 10, following the same protocol as the beam search heuristic. (Table 1: Experimental results: accuracy vs. verified accuracy using IBP, for different training methods. All models tuned for verified accuracy, numbers in %.) Both for data augmentation and adversarial training, altered samples are recomputed throughout training. 4. Entropy Regularization: It has previously been observed that entropy regularization on prediction probabilities can partially mitigate the severity of under-sensitivity. 5. IBP-Verified Training: Here we use IBP verification as described in Section 5, which provides upper bounds on the prediction probability of arbitrarily reduced inputs. We penalize the model with an auxiliary hinge loss on the difference between the upper probability bound for the gold label y and the nominal probability P(y|x_nom). Note that the upper bound serves as a proxy for the adversarial objective, as it over-approximates the probabilities of arbitrary reduced samples, covering the full reduction space comprehensively. Training Details. The training methods described above make use of an additive contribution to the training loss besides standard log-likelihood. We tune the scale of the respective contribution in [0.01, 0.1, 1.0, 10.0, 100.0]. All experiments used a learning rate of 0.001, the Adam optimizer, and batch size 128. We perform early stopping with respect to verified accuracy, for a maximum of 3M training steps. For verified training, we found it useful to continuously phase in the volume of the perturbation space to its maximum size, similar to curriculum schedules in prior IBP training work. Concretely, we compute the per-dimension center of upper and lower bound, and start linearly increasing its volume until it reaches the full perturbation space volume. Similarly we phase in the perturbation radius, i.e. the maximum number of words deleted, from 1 to the maximum sequence length of 48. We tune phase-in intervals in training steps. We also experimented with over-inflating the perturbation volume to larger than its real size at training time, as well as randomly sampling a maximum perturbation radius during training, neither of which improved verifiability. Evaluating the Effectiveness of IBP for Verification. Tables 1a and 1b show the main results. For both datasets, a non-negligible portion of data points can be verified using IBP.
The gap to (standard) accuracy, however, is striking: only a small fraction of correctly predicted inputs is actually verifiably not under-sensitive. Note that IBP verified accuracy is naturally bounded above by the beam search heuristic, which however does not cover the full reduction space, and overestimates verification rates. IBP verification becomes particularly effective when adding the IBP-verifiability objective during training, verifying 18.36% and 17.44% of samples on SNLI and MNLI. Verifiability does however come at a cost: test accuracy is generally decreased when tuning for verifiability, compared to standard training. This highlights a shortcoming of test accuracy as a metric: it does not reflect the under-sensitivity problem. Once under-sensitivity is taken into account by dedicated training objectives, or by tuning for verification rates, nominal accuracy suffers. Computational Efficiency of IBP Verification. Table 2 gives a breakdown of the computational cost incurred for verification, both empirically and in terms of the theoretical worst-case number of forward passes required per sample. IBP verification comes with small computational overhead compared to a standard forward pass, which is incurred by propagating upper and lower interval bounds through the network once. A full oracle is computationally infeasible; instead we used an exhaustive search oracle, but only up to a maximum budget of 200K forward passes per sample. Even when stopping as soon as a single reduced sample is found that violates the specification, the incurred time is orders of magnitude larger than verification via IBP. Comparing Training Methods. We next discuss the differences between training methods, and how they are reflected in verified model behaviour. In absolute terms, standard training does not adhere to the under-sensitivity specification well, neither on SNLI nor on MNLI. Data augmentation and random adversarial training lead to slightly different results on the two datasets, albeit without major improvements. These methods have a strong random component in their choice of deletions, and this tends to lead to lower verification rates on MNLI, where premises are on average 6.2 tokens longer, and the reduction space is correspondingly larger. Beam search adversarial training leads to improved verification rates on SNLI, yet not for MNLI, and it is noteworthy that when also trained with beam search adversarial samples, beam search evaluation improves substantially. Entropy regularization improves verified accuracy over standard training; this is in line with previous observations that it mitigates under-sensitivity behaviour. Finally, the dedicated IBP training objective substantially raises verification rates compared to all other approaches. In an ablation (Table 3) we evaluate performance on short sequences (up to 12 tokens) in the SNLI test set: here an exhaustive search over all possible reductions is feasible. The absolute verification rates are still low, but we observe that shorter sequences are comparatively easier to verify, and that the incomplete IBP verification can approach the verification levels of the complete oracle (see rows 1, 2, 5, and 6). For adversarial training (rows 3 and 4), however, oracle verification rates are much closer to the beam search heuristic.
This suggests that i) for short sequences the smaller perturbation space can be covered better by beam search, and ii) adversarial training can lead to high verifiability on short sequences, but it fits the model in a way that results in loose IBP bounds. Verification of a specification offers a stronger form of robustness than robustness to adversarial samples. Adversarial accuracy, as e.g. derived from beam search, might conceptually be easier to compute, yet has no guarantee of finding all or the strongest violations. In fact, evaluating against weak adversaries under-estimates the extent of a problem and may lead to a false sense of confidence. IBP verification can provide guarantees on the non-existence of reduced inputs, but it is incomplete and can have false negatives. Observations of comparatively low verification or adversarial accuracy rates, as in this work, are not new, and have been found to be a general problem of datasets with high sample complexity. We emphasise that under-sensitivity is a very challenging problem to address; even the relatively conservative specification of non-increasing probability under deletion cannot be fulfilled for the majority of test samples under the baselines tested. We see the verification of the attention-based DAM model as a stepping stone towards the verification of larger and more performant attention-based architectures, such as BERT. Following the derivations here, token deletion bounds could similarly be propagated through BERT's self-attention layers. Towards this end, however, we see two main hurdles: i) BERT's network depth, resulting in gradually looser IBP bounds, and ii) BERT's word piece tokenisation, which requires special consideration in conjunction with token-level perturbations. We have investigated under-sensitivity to input text deletions in NLI and recast the problem as one of formally verifying a specification on model behaviour. We have described how Interval Bound Propagation can be used in order to verify the popular Decomposable Attention Model, and have then compared several training methods in their ability to address and be verified against under-sensitivity. We observed that only a relatively small fraction of data points can be positively verified, but that IBP training in particular is capable of improving verified accuracy.
Formal verification of a specification on a model's prediction undersensitivity using Interval Bound Propagation
906
scitldr
Training with a larger number of parameters while keeping fast iterations is an increasingly adopted strategy and trend for developing better performing Deep Neural Network (DNN) models. This necessitates increased memory footprint and computational requirements for training. Here we introduce a novel methodology for training deep neural networks using 8-bit floating point (FP8) numbers. Reduced bit precision allows for a larger effective memory and increased computational speed. We name this method Shifted and Squeezed FP8 (S2FP8). We show that, unlike previous 8-bit precision training methods, the proposed method works out of the box for representative models: ResNet50, Transformer and NCF. The method can maintain model accuracy without requiring fine-tuning of loss scaling parameters or keeping certain layers in single precision. We introduce two learnable statistics of the DNN tensors - shifted and squeezed factors - that are used to optimally adjust the range of the tensors in 8 bits, thus minimizing the loss in information due to quantization. Deep neural networks have achieved state-of-the-art performance on a wide variety of computer vision, audio, and natural language processing (NLP) tasks. This has resulted in an explosion of interest around techniques to reduce the memory footprint and energy consumption of neural network training and inference. Although there are a number of methods to address some of these issues for inference, the most effective method for training is using reduced-precision numerical formats. While 32-bit floating point (FP32) is the most common data format for neural network training, recent hardware has leveraged techniques that allow for training with 16-bit data formats (Köster et al., 2017). However, 8-bit precision training remains an open challenge. Current FP8 training methodologies require either specialized chunk-based accumulation, stochastic rounding techniques, loss scaling, or maintaining some layers of the network in higher precision. Tuning these knobs is non-intuitive and requires significant experimentation for each individual network. Accelerating the adoption of 8-bit data in training DNNs requires a hardware-friendly and out-of-the-box implementation of FP8. Due to the reduced number of mantissa bits, 8-bit multipliers are smaller and consume less power compared to higher-bit representations. In this work we describe a novel 8-bit floating point (FP8) format - shifted and squeezed FP8 (S2FP8) - which has the following advantages compared to previously proposed 8-bit training methodologies: • S2FP8 eliminates the need for loss scaling, which requires significant tuning of the loss scale values and schedule for individual topologies • Leveraged by the forward and backward passes of model training, S2FP8 is effective in adjusting the range of gradients and also of activations and weights • S2FP8 does not require keeping the first and last layer in FP32 precision, which is needed for other approaches; it does, however, maintain the master weights and accumulations inside the matrix multipliers in FP32 We demonstrate across image classification, translation, and recommendation models that S2FP8 outperforms previous 8-bit approaches, and reaches the accuracy of FP32 models without any additional hyperparameter tuning. The success of the 32-bit floating point data type in training deep neural networks has increased interest in the feasibility of even lower precision training.
The exponential demand for compute involved in training these deep neural networks has led to multiple advancements in lower precision data types. Several studies have developed techniques such as loss scaling, stochastic rounding, and others to train effectively in 16-bit, along with associated hardware support. Using 16-bit fixed point, prior work showed that stochastic rounding techniques were crucial for model convergence even for simple convolutional neural networks. It has also been noted that Google's bfloat16 format has the same number of exponent bits as FP32, leading to the success of that format without commonly requiring hardware-intensive techniques such as stochastic rounding or other framework-level techniques such as loss scaling. Although 8-bit formats have significant performance and memory advantages, convergence is especially challenging due to loss of accuracy in the backpropagated gradient values. One line of work demonstrated training models with matrix multiplications and convolutions in FP8, but used FP16 with chunk-based accumulations and stochastic rounding hardware. Another demonstrated success with FP8, accumulating in FP32 and using loss scaling techniques on ResNets, Transformer and GNMT networks. However, it too requires the first and last layers of the model to be in FP32, and similarly leverages stochastic rounding techniques to maintain model accuracy. Unlike the S2FP8 proposed in this work, both of these FP8 training techniques emphasize the need for efficient loss scaling, rounding hardware and a restriction on some layers being in higher precision. Other work quantized the weights, activations and gradients of AlexNet to 1, 2 and 6 bits respectively, but also needed to maintain the first and last convolution layers in full precision and to stochastically quantize the gradients. Integer training has likewise been demonstrated for LeNet-5 and AlexNet with 8 bits for activations, errors and gradients and 2 bits for weights; however, these approaches also required custom tuning such as novel initialization techniques and layer-wise scaling instead of Batch Normalization and Softmax. These approaches lack generalizability to other models, requiring significant fine-tuning. To the best of our knowledge, there does not exist an out-of-the-box solution for using FP8 in training deep learning topologies without the need for tuned loss scaling techniques, a requirement that certain layers remain in full precision, and efficient hardware rounding schemes like stochastic rounding. 3 SHIFTED AND SQUEEZED 8-BIT FLOATING POINT FORMAT The FP8 format, with 2 bits of mantissa and 5 bits of exponent, is both narrow (i.e., its dynamic range is very limited, from 2^−16 to 2^16) and of low accuracy (the machine epsilon is only 2^−3). Figure A1 illustrates the range and accuracy of FP8. In contrast, FP32 ranges from 2^−149 to 2^128 with a machine epsilon of 2^−24 (Table A1). On the other hand, tensors involved in neural networks (weights, activations and gradients) are spread across varying scales. As illustrated in Figure 1, the tensor distributions change over the course of training, spanning different orders of magnitude. As a result, 8-bit training usually requires a combination of multiple techniques to capture the full dynamic range of values for model training. Some of these techniques include: • Loss scaling: scale the loss L(w) by a constant λ before backpropagation. This makes the gradients artificially larger, allowing them to fit within the FP8 range.
Gradients are then scaled down before being accumulated into the trainable weights, as shown in Equation 6. • Stochastic rounding: alleviate quantization errors by capturing some of the information discarded when truncating to lower precision at the output of a GEMM operation. Between these two techniques, loss scaling is the more critical; once the magnitude of the gradients can no longer be represented in the FP8 range, training convergence will not be possible. However, loss scaling only modifies the gradients. Weights and activations can also (albeit admittedly less frequently) exceed FP8's representable range of [2^−16, 2^16]. In those scenarios, convergence can also be affected. The issue with loss scaling is that it requires user interaction. Models have to be modified, and, more importantly, tedious empirical tuning is required to find the correct loss scaling schedule. While some networks can be trained with constant loss scaling, some, notably Transformers, require dynamic "back-off" and improved loss scaling. This requires significant trial and error to tune the scaling schedule, slowing down wide adoption of low-precision numerical formats. To alleviate these issues and make neural network training possible with no model modifications or hyperparameter tuning, we propose a new 8-bit floating point format. Consider a tensor X of size N, i.e., X = {X_i}, i = 1, ..., N. Instead of directly encoding each X_i in FP8, we store X using N FP8 numbers accompanied by two (squeeze and shift) factors α and β (the "statistics"; see Figure 2). (Figure 3: Impact of the shifted and squeezed transformation log_2 |Y| = α log_2 |X| + β. α lets the distribution be as wide as necessary (though with an associated loss of precision), and β lets us shift the distribution around any value.) For X_i ≠ 0, X and Y are related through log_2 |Y_i| = α log_2 |X_i| + β, where the sign of Y_i is chosen so that X_i and Y_i have the same sign. This representation allows α and β to be chosen so that, together with the tensor Y, they capture most of the dynamic range of the tensor X. As we will see in Section 4, this is all that is necessary to train networks using 8-bit floating point numbers. In order for Y to be a tensor suitable to be represented by FP8 numbers, we enforce that its log-magnitudes have zero mean and a maximum value within the dynamic range of FP8 (e.g. 15): Σ'_i log_2 |Y_i| = 0 and max'_i log_2 |Y_i| = 15, where the prime indicates that the sum and the max, respectively, ignore any i such that Y_i = 0. Those equations ensure that the log_2(|Y|) values are distributed with zero mean and each is at most 15, which is ideal for an FP8 format. Writing µ for the mean and m for the maximum of the non-zero log_2 |X_i|, we find α = 15/(m − µ) and β = −αµ. This new tensor format is used in the training procedure (forward pass, backward pass, weight update) described in Figure 4. Forward and backward MatMuls use this new S2FP8 format. Master weights are kept in FP32 and updated using S2FP8 gradients. Accumulations inside the GEMM kernel are kept in full FP32 precision. Figure 3 illustrates the impact of α and β. By having those two extra degrees of freedom for each tensor, the majority of the dynamic range of each tensor can now be captured, whether very small (β > 0), very large (β < 0), very narrow (α > 1) or very wide (α < 1). One way to interpret α and β is to consider them as parameters of a distribution generating the tensor values log_2(|X_i|). We can then say that, by continuously computing α and β, we are effectively learning the distribution of log_2(|X_i|). Figure 5c shows the evolution of µ, m, α and β for a particular tensor of ResNet-20. We see that α and β converge to, approximately, 5 and 21, respectively.
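As an illustration, the simulated S2FP8 truncation used in our experiments (Section 4.1) can be sketched as follows (a minimal NumPy sketch under the equations above; fp8_round stands in for RNE truncation to the 1-5-2 FP8 grid and is assumed given):

    import numpy as np

    def s2fp8_truncate(X, fp8_round):
        # Statistics over the non-zero entries of X.
        mask = X != 0
        logmag = np.log2(np.abs(X[mask]))
        mu, m = logmag.mean(), logmag.max()
        alpha = 15.0 / (m - mu)   # squeeze factor (assumes m > mu)
        beta = -alpha * mu        # shift factor: zero-mean log-magnitudes
        # Shift and squeeze, truncate to FP8, then invert the transform.
        Y = np.zeros_like(X)
        Y[mask] = np.sign(X[mask]) * 2.0 ** (alpha * logmag + beta)
        Yq = fp8_round(Y)
        Xq = np.zeros_like(X)
        nz = Yq != 0
        Xq[nz] = np.sign(Yq[nz]) * 2.0 ** ((np.log2(np.abs(Yq[nz])) - beta) / alpha)
        return Xq

In an actual kernel the forward and backward GEMMs would consume (Y, α, β) directly rather than reconstructing X in FP32.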
From Equation 1, we conclude that for this tensor: (Figure 4: conversion from FP32 to S2FP8. When using S2FP8 for training, forward and backward GEMMs only use S2FP8. The master weights are kept in FP32 and updated during the update step.) • since α > 1, this means that X is expanded into Y, i.e., X is more narrow than what FP8 allows; • since β > 0, this means that X is right-shifted into Y, i.e., X is smaller than what FP8 allows. At convergence, those α and β values represent the distribution of each converged tensor. Notice that all statistics stabilize in the last third of the training, where the learning rate is decreased, indicating the network is converging to its final state. In this section, we compare S2FP8 training with baseline FP32 and FP8 training, with and without loss scaling, for: Residual Networks of varying depths on the CIFAR-10 and ImageNet datasets, Transformer on the IWSLT'15 English-Vietnamese dataset, and Neural Collaborative Filtering (NCF) on the MovieLens 1 Million dataset. For our experiments, we use the open source TensorFlow Models repository for ResNet and NCF, and Tensor2Tensor for Transformer, with added S2FP8 data type simulation support using the methodology described in subsection 4.1. For a given model, we keep the hyperparameters consistent across FP32, FP8 and S2FP8 evaluations. We simulated S2FP8 by inserting appropriate truncation functions throughout the network, before and after every convolution and matrix-matrix product operation, during both the forward and backward passes. The rest of the network is kept in FP32, and those truncations simulate the low-precision training described in subsection 3.2. The truncation function takes as input a tensor X, computes its log-magnitude mean and maximum, computes the appropriate α and β, and finally truncates X by applying the shift-and-squeeze transformation, truncating with a usual FP8 truncation, and inverting the transformation; the FP8 truncation uses RNE (round-to-nearest, with ties broken by rounding to even) rounding, which is easier to implement and most widely supported in hardware. We first present results with Residual Networks of varying depths on the CIFAR-10 image recognition dataset. We trained the model on 1 GPU using standard parameters: 250 epochs, batch size of 128, SGD with momentum of 0.9, initial learning rate of 0.1 decreased by a factor of 10 after epochs 100, 150 and 200. Table 1 and Figure A2 present the results. We observe that S2FP8 reaches almost exactly the FP32 baseline, sometimes even improving over it. Out-of-the-box FP8 does not converge and has very poor accuracy. Finally, we also report FP8 with a constant loss scaling of 100 (FP8+LS(100)). We also evaluate S2FP8 on the 1000-class ImageNet dataset. Here, we trained the network on 4 GPUs using standard parameters: 90 epochs, batch size of 256, SGD with momentum of 0.9, initial learning rate of 0.1 decreased by a factor of 10 after epochs 30, 60, 80 and 90. Table 2 and Figure 6 present the results. Again, we observe that S2FP8 gets very close to the FP32 baseline. Out-of-the-box FP8 quickly diverges and does not converge at all. For FP8 with loss scaling to converge, one has to not truncate the first and last layer, consistent with prior work, which we denote as Ex in Table 2 below. A loss scaling of 10,000 can then be used to reach the baseline (FP8+LS(10k)+Ex). Finally, stochastic rounding can be added, and it slightly improves the precision (FP8+LS(100k)+Ex+SR). However, both those cases are not out-of-the-box, as they require loss scaling tuning and some layers to be kept in full precision.
S2FP8 does not suffer from that, thanks to its improved quantization: all layers can be truncated and no loss scaling is required. We also tested S2FP8 on a small Transformer (Transformer Tiny) on the English-Vietnamese dataset. The model has 2 hidden layers of size 128 and a filter of size 512, and is trained using the Adam optimizer. Table 3 and Figure 7 show the results, where we compare FP32, S2FP8 and FP8 with exponential loss scaling. We tried many loss scaling schedules (constant and exponential, with various initializations) and report the best results. As one can see, S2FP8 reaches the baseline with no hyperparameter tuning. FP8, on the other hand, does not, even after extensive loss scaling tuning. This shows the value of an out-of-the-box method for the user.

Table 3: BLEU score (from 0 to 100) for the translation task on the English-Vietnamese dataset with Transformer Tiny.
  En-Vi            | FP32 | S2FP8 | ∆   | FP8 | FP8+LS(exp)
  Transformer tiny | 25.3 | 25.3  | 0.0 | NaN | 21.3

The Neural Collaborative Filtering (NCF) network comprises embeddings for users and items from the MovieLens dataset, which are then passed to a Multi-Layer Perceptron (MLP) network to learn the user-item interaction. Matrix-multiplication operations are the building blocks of such models. We compare S2FP8 with FP32 and FP8 without loss scaling. We simulate matrix multiplications and look-ups from the embeddings in S2FP8 and compare to FP8 with RNE. We trained the model on the MovieLens 1 Million dataset with the following standard parameters: 20 iterations, batch size of 1024 on 4 GPUs, 8 predictive factors, learning rate of 0.0005 using the Adam optimizer. Figure 8 and Table 4 show the results, where we compare FP32, S2FP8 and FP8 without loss scaling. This again shows that S2FP8 easily reaches the baseline out-of-the-box, without tuning of any sort. FP8 gets relatively close, but cannot reach the baseline. S2FP8 is a new data type and requires its own circuitry to be implemented in a tensor processing engine. However, the added overhead is very minimal and affects neither data throughput nor compute speed. In order to convert FP32 tensors into S2FP8, two hardware (HW) components are needed. One is to calculate each tensor's statistics (Equation 3), which brings minimal HW complexity. To make compute operations even easier, these statistics could be stored in lower precision such as FP8/INT8. The other component is to adjust the exponent and mantissa of all those tensor elements by applying the squeeze (α) and shift (β) factors in Equation 4 before truncating them into their 8-bit placeholders. The shift could be done using simple element-wise add/subtract operations on the exponents, and the element-wise squeeze could be applied to the mantissa portions. Another consideration is within the tensor processing engine (e.g., a GEMM engine), which requires the α and β factors while doing the calculations. The FP32 results are converted back to S2FP8 when needed (e.g., to store back in memory), as shown in Figure 4. We introduce a novel 8-bit floating point data type (S2FP8) that gives competitive performance in comparison to state-of-the-art FP32 baselines over a range of representative networks. S2FP8 makes use of shifted and squeezed factors to shift and rescale the range of tensors prior to truncation. S2FP8 allows training of neural networks with an 8-bit format while eliminating the need for loss scaling tuning and hardware-complex rounding techniques.
In addition, compared to existing FP8 implementations, we also eliminate the restriction of maintaining the first and last layers in FP32.

Table 4: Results on MovieLens 1 Million.
  Movielens 1 million | FP32  | S2FP8 | ∆     | FP8
  NCF                 | 0.666 | 0.663 | 0.003 | 0.633

(Figure A1: The range and precision of FP8. Bars indicate the number density between each power of 2. Since FP8 has 2 mantissa bits, the density is 4 (except in the denormals), and the associated machine epsilon is 2^−3 = 1/8. The normal representable range goes from 2^−14 to (1 − 2^−3)·2^16, with denormals from 2^−16 to 2^−14.) With loss scaling (Equation 6), the loss is scaled up before backpropagation and the gradients are rescaled during the update: ∂(λL)/∂w (w) = λ ∂L/∂w (w) ⇒ w^(k+1) = w^(k) − α (1/λ) ∂(λL)/∂w (w^(k)). (Figure A2: Convergence of ResNet-50 with the CIFAR-10 dataset; L2 loss per training step, FP32 vs. S2FP8.)
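Equation 6 is straightforward to emulate in any framework with autograd (a minimal sketch with names of our own choosing, shown here only to make the baseline concrete):

    import torch

    def scaled_sgd_step(loss, params, lr, scale):
        # Scale the loss up before backprop so small gradients stay
        # representable in FP8, then scale the gradients back down
        # in the weight update (Equation 6).
        grads = torch.autograd.grad(scale * loss, params)
        with torch.no_grad():
            for p, g in zip(params, grads):
                p -= lr * (g / scale)

S2FP8 removes the need for this machinery entirely, since the per-tensor statistics absorb the scale instead.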
We propose a novel 8-bit format that eliminates the need for loss scaling, stochastic rounding, and other low precision techniques
907
scitldr
Variational Auto-Encoders (VAE) are capable of generating realistic images, sounds and video sequences. From a practitioner's point of view, we are usually interested in solving problems where tasks are learned sequentially, in a way that avoids revisiting all previous data at each stage. We address this problem by introducing a conceptually simple and scalable end-to-end approach of incorporating past knowledge by learning the prior directly from the data. We consider a scalable boosting-like approximation for the intractable theoretical optimal prior. We provide empirical studies on two commonly used benchmarks, namely MNIST and Fashion MNIST, on disjoint sequential image generation tasks. For each dataset the proposed method delivers the best results among comparable approaches, avoiding catastrophic forgetting in a fully automatic way with a fixed model architecture. Since most real-world datasets are unlabeled, unsupervised learning is an essential part of the machine learning field. Generative models allow us to obtain samples from observed empirical distributions of complicated high-dimensional objects such as images, sounds or texts. This work is mostly devoted to VAEs, with a focus on the incremental learning setting. It was observed that VAEs ignore dimensions of latent variables and produce blurred reconstructions (Sønderby et al., 2016). There are several approaches to address these issues, including amortization gap reduction, KL-term annealing and the introduction of alternative optimization objectives. In all cases, it was observed that the choice of the prior distribution is highly important and that use of the default Gaussian prior over-regularizes the encoder. In this work, we address the problem of constructing the optimal prior for VAE. The form of the optimal prior was obtained by maximizing a lower bound of the marginal likelihood (ELBO) as the aggregated posterior over the whole training dataset. To construct a reasonable approximation, we consider a method of greedy KL-divergence projection. Applying the maximum entropy approach allows us to formulate a feasible optimization problem and avoid overfitting. The greedy manner in which components are added to the prior reveals a high potential of the method in an incremental learning setting, since we expect prior components to store information about previously learned tasks and overcome catastrophic forgetting. From a practitioner's point of view, it is essential to be able to store one model capable of solving several tasks arriving sequentially. Hence, we propose an algorithm with one pair of encoder-decoder and update only the prior. We validate our method on disjoint sequential image generation tasks. We consider the MNIST and Fashion-MNIST datasets. VAEs consider a two-step generative process through a prior over the latent space, p(z), and a conditional generative distribution p_θ(x|z), parametrized by a deep neural network. Given the empirical data distribution p_e(x) = (1/N) Σ_{n=1}^{N} δ(x − x_n), we aim at maximizing the expected marginal log-likelihood. Following the variational auto-encoder architecture, amortized inference is performed by choosing a variational posterior distribution q_φ(z|x), parametrized by a DNN, resulting in the objective L = E_{p_e(x)} [E_{q_φ(z|x)} log p_θ(x|z) − KL(q_φ(z|x) ‖ p(z))]. Maximizing this bound over the prior p(z) for a fixed posterior yields, as the optimal prior, the aggregated posterior over the whole training dataset, p*(z) = (1/N) Σ_{n=1}^{N} q_φ(z|x_n).
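A single-sample Monte Carlo estimate of this objective, with the prior kept as a swappable log-density, might look as follows (a minimal PyTorch sketch; encoder, decoder and prior_logpdf are assumed callables of our own naming, with the decoder returning Bernoulli means for binarized images):

    import math
    import torch
    import torch.nn.functional as F

    def elbo(x, encoder, decoder, prior_logpdf):
        # q(z|x) is a diagonal Gaussian; sample via reparametrization.
        mu, logvar = encoder(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        log_q = -0.5 * (((z - mu) ** 2) / logvar.exp() + logvar
                        + math.log(2 * math.pi)).sum(-1)
        log_p_x = -F.binary_cross_entropy(decoder(z), x,
                                          reduction="none").sum(-1)
        # MC estimate of E_q[log p(x|z) + log p(z) - log q(z|x)].
        return (log_p_x + prior_logpdf(z) - log_q).mean()

Keeping the prior as an explicit log-density is what later lets us swap in a boosted mixture without touching the encoder or decoder.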
Clearly, such an aggregated-posterior prior leads to overfitting. Hence, keeping the same functional form, a truncated version with K pseudo-inputs was proposed as the VampPrior. In the present paper, we address two crucial drawbacks of the VampPrior. Firstly, for large values of K variational inference will be very computationally expensive; even for the MNIST dataset a mixture of 500 components was used. Secondly, it is not clear how to choose K for a particular dataset, as we have a trade-off between prior capacity and overfitting. For this purpose, we adapt the maximum entropy variational inference framework. We add components to the prior during training in a greedy manner and show that in this setting fewer components are needed for comparable performance. Assume that we want to approximate a complex distribution p* by a mixture of simple components p_i. Each component of the mixture is learned greedily. At the first stage we initialize the mixture with a standard normal distribution. Afterwards, we add new components h from some family of distributions Q, with weight α ∈ [0, 1], one by one: p_t = αh + (1 − α)p_{t−1}, in two stages: 1. Find the optimal h ∈ Q. We apply the maximum entropy approach to minimize the KL-divergence between the mixture and the target distribution; this task can be reformulated as a KL-divergence minimization problem over h. 2. Choose the α corresponding to the optimal h: α* = arg min_{α∈[0,1]} KL(αh + (1 − α)p_{t−1} ‖ p*). In this work, we suggest combining the boosting algorithm for density distributions with the idea of the VampPrior to deal with the problem of catastrophic forgetting in the incremental learning setting. The proposed algorithm for training a VAE consists of two steps. In the first one, we optimize the evidence lower bound w.r.t. the parameters of the encoder and decoder. In the second stage, we learn a new component h for the prior distribution, and its weight α, keeping the parameters of the encoder and decoder fixed. We learn each component to be the posterior given a learnable artificial input u, q_φ(z|u), with the target density being the mixture of posteriors at all points from a random subset M of the whole training dataset D. Parameters of the first component, u_0, are obtained by ELBO maximization simultaneously with the network parameters, as shown in Algorithm 1.

Algorithm 1 (BooVAE). Input: dataset D, regularization weight λ, maximal number of components K. Output: p_K, θ*, φ*. Choose a random subset M ⊂ D and initialize the prior p_0 = q_φ(z|u_0) and k = 1; set θ*, φ*, u_0 = arg max L(p_0, θ, φ); then greedily add components h and weights α as described above until K components are reached.

In the incremental learning setting, we do not have access to the whole dataset. Instead, subsets D_1, ..., D_T arrive sequentially and may come from different domains. With the first task D_1 we follow Algorithm 1 to obtain the prior p and optimal values of the network parameters. Starting from t > 1, we add regularization to the ELBO, which ensures that the model keeps encoding and decoding learned prior components in the same manner (see Appendix B). Since we do not have access to the whole dataset anymore, the form of the optimal prior also changes. We use the prior of the previous step as a proxy for the optimal one (see Appendix C); with tasks of roughly equal size, p*_(t) ≈ ((t − 1)/t)·p^(t−1) + (1/t)·(1/|D_t|) Σ_{x∈D_t} q_φ(z|x). We perform experiments on the MNIST dataset, containing ten hand-written digits, and on Fashion MNIST, which also has ten classes of different clothing items. Therefore, we have ten tasks in each dataset for sequential learning. To evaluate the performance of the VAE approach, we estimate the negative log-likelihood (NLL) on the test set, calculated by importance sampling with 5000 samples for each observation. In an offline setting, we compare our method to the VampPrior and a Mixture of Gaussians prior; a sketch of the greedy weight search from Algorithm 1 follows below.
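A simple way to pick the weight α in step 2 is a grid line-search on a Monte Carlo estimate of KL(p_t ‖ p*), using E_{p_t}[f] = α E_h[f] + (1 − α) E_{p_{t−1}}[f] (a sketch with hypothetical callables for the log-densities; this illustrates the optimization problem rather than the exact procedure of the paper):

    import numpy as np

    def kl_mixture_to_target(alpha, z_h, z_p, h_lp, p_lp, target_lp):
        # MC estimate of KL(alpha*h + (1-alpha)*p_prev || p*), with
        # z_h ~ h and z_p ~ p_prev; *_lp are log-density callables.
        def f(z):
            mix_lp = np.logaddexp(np.log(alpha) + h_lp(z),
                                  np.log1p(-alpha) + p_lp(z))
            return mix_lp - target_lp(z)
        return alpha * f(z_h).mean() + (1.0 - alpha) * f(z_p).mean()

    def best_alpha(z_h, z_p, h_lp, p_lp, target_lp,
                   grid=np.linspace(0.05, 0.95, 19)):
        kls = [kl_mixture_to_target(a, z_h, z_p, h_lp, p_lp, target_lp)
               for a in grid]
        return grid[int(np.argmin(kls))]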
In the first case (VampPrior), each component is a posterior distribution given a learnable pseudo-input, while in the second we learn each component as a Gaussian with diagonal covariance matrix in the latent space. For both priors, all the components are learned simultaneously. The results in the tables above demonstrate that BooVAE manages to overcome catastrophic forgetting better than pure EWC regularization with a standard normal prior. The unstable NLL values for the VAE with standard prior and EWC can be explained by the fact that some classes in the dataset are quite similar: even though the model forgets an old class, knowledge about a new class lets it reconstruct the old one relatively well, resulting in deceptively better numbers. In Appendix D we provide results for each task separately to illustrate this effect further. (Figure 1: Samples from the prior after training on 10 tasks incrementally. Our approach is capable of sampling images from different tasks, while other methods either stick to the latest seen class, or are only able to reproduce images from the simplest classes.) In Figure 1, we provide samples from the prior illustrating the generation ability of all the methods after training on all ten tasks sequentially. Appendices E and F provide a detailed qualitative and quantitative (Figure 3) evaluation of sample diversity for the different models. In this work, we propose a method for learning a data-driven prior, using an MM algorithm which allows us to reduce the number of components in the prior distribution without loss of performance. Based on this method, we suggest an efficient algorithm for incremental VAE learning which has a single encoder-decoder pair for all the tasks and drastically reduces catastrophic forgetting. To make sure that the model keeps encoding and decoding prior components as it did during the training of the corresponding task, we add a regularization term to the ELBO. For that purpose, at the end of each task we compute reconstructions of the components' mean values, p_{θ_j}(x|µ_{i,j}) = p_{ij}(x); all things considered, the objective for the maximization step augments the ELBO with a term penalizing deviations from these stored reconstructions. When training a new component and its weight, we want the optimal prior that we approximate to be close to the mixture of variational posteriors at all the training points seen by the model. Since we do not have access to the data from the previous tasks, we suggest using the trained prior as a proxy for the corresponding part of the mixture. Therefore, the optimal prior for tasks 1:t can be expressed using the optimal prior from the previous step and the training dataset from the current stage, as in the approximation given above. During training, we approximate the optimal prior by the mixture p^(t), using a random subset M_t ⊂ D_t containing n observations of the given dataset. To evaluate the diversity of the generated images, we calculate the KL-divergence between a uniform distribution over the classes and the empirical distribution of classes generated by the model. Since we want to assign classes to generated images automatically, we train a classification network which can classify images with high confidence (more than 90%), use it to label the generated objects, and calculate the empirical distribution over 10000 generated samples. Figure 3 depicts the proposed metric to evaluate diversity. We want this value to stay as close as possible to 0 as the number of tasks grows, since that means the model keeps generating diverse images.
We can see a drastic difference between boosting and the other approaches: the class distribution of samples from the prior estimated by the boosting approach stays very close to uniform, in contrast to all the comparable methods.
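The diversity metric itself is straightforward to compute from classifier labels (a sketch; we assume the KL is taken from the uniform distribution to the empirical one, with smoothing so empty classes do not give infinite values):

    import numpy as np

    def class_diversity_kl(pred_classes, n_classes=10):
        # KL( uniform || empirical class distribution ); 0 means the
        # generator covers all classes equally.
        counts = np.bincount(pred_classes, minlength=n_classes).astype(float)
        emp = np.clip(counts / counts.sum(), 1e-12, None)
        uniform = np.full(n_classes, 1.0 / n_classes)
        return float(np.sum(uniform * (np.log(uniform) - np.log(emp))))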
Novel algorithm for Incremental learning of VAE with fixed architecture
908
scitldr
The field of few-shot learning has recently seen substantial advancements. Most of these advancements came from casting few-shot learning as a meta-learning problem. Model Agnostic Meta-Learning, or MAML, is currently one of the best approaches for few-shot learning via meta-learning. MAML is simple, elegant and very powerful; however, it has a variety of issues, such as being very sensitive to neural network architectures, often leading to instability during training, requiring arduous hyperparameter searches to stabilize training and achieve high generalization, and being very computationally expensive at both training and inference times. In this paper, we propose various modifications to MAML that not only stabilize the system, but also substantially improve the generalization performance, convergence speed and computational overhead of MAML, which we call MAML++. The human capacity to learn new concepts using only a handful of samples is immense. In stark contrast, modern deep neural networks need, at a minimum, thousands of samples before they begin to learn representations that can generalize well to unseen data-points BID11 BID9, and mostly fail when the data available is scarce. The fact that standard deep neural networks fail in the small data regime can provide hints about some of their potential shortcomings. Solving those shortcomings has the potential to open the door to understanding intelligence and advancing Artificial Intelligence. Few-shot learning encapsulates a family of methods that can learn new concepts with only a handful of data-points (usually 1-5 samples per concept). This possibility is attractive for a number of reasons. First, few-shot learning would reduce the need for data collection and labelling, thus reducing the time and resources needed to build robust machine learning models. Second, it would potentially reduce training and fine-tuning times for adapting systems to newly acquired data. Third, in many real-world problems there are only a few samples available per class, and the collection of additional data is either remarkably time-consuming and costly or altogether impossible, thus necessitating the need to learn from the available few samples. The nature of few-shot learning makes it a very hard problem if no prior knowledge exists about the task at hand. For a model to be able to learn a robust model from a few samples, knowledge transfer (see e.g. BID4) from other similar tasks is key. However, manual knowledge transfer from one task to another for the purpose of fine-tuning on a new task can be a time-consuming and ultimately inefficient process. Meta-learning BID22 BID24, or learning to learn BID23, can instead be used to automatically learn across-task knowledge (usually referred to as across-task, or sometimes slow, knowledge) such that our model can, at inference time, quickly acquire task-specific (or fast) knowledge from new tasks using only a few samples. Meta-learning can be broadly defined as a class of machine learning models that become more proficient at learning with more experience, thus learning how to learn. More specifically, meta-learning involves learning at two levels: at the task-level, where the base-model is required to acquire task-specific (fast) knowledge rapidly, and at the meta-level, where the meta-model is required to slowly learn across-task (slow) knowledge.
(Figure 1: 2 out of 3 seeds with the original strided MAML become unstable and erratic, whereas all 3 of the strided MAML++ models consistently converge very fast, to much higher generalization accuracy, without any stability issues.) Recent work in meta-learning has produced state-of-the-art results in a variety of settings BID26 BID2 BID27 BID0 BID25 BID14 BID15 BID1 BID3 BID17. The application of meta-learning in the few-shot learning setting has enabled the overwhelming majority of the current state-of-the-art few-shot learning methods BID25 BID19 BID6 BID7 BID15 BID17. One such method, known for its simplicity and state-of-the-art performance, is Model Agnostic Meta-Learning (MAML) BID7. In MAML, the authors propose learning an initialization for a base-model such that after applying a very small number of gradient steps with respect to a training set on the base-model, the adapted model can achieve strong generalization performance on a validation set (the validation set consists of new samples from the same classes as the training set). Relating back to the definitions of meta-model and base-model, in MAML the meta-model is effectively the initialization parameters. These parameters are used to initialize the base-model, which is then used for task-specific learning on a support set, and then evaluated on a target set. MAML is a simple yet elegant meta-learning framework that has achieved state-of-the-art results in a number of settings. However, MAML suffers from a variety of problems which: 1) cause instability during training, 2) restrict the model's generalization performance, 3) reduce the framework's flexibility, 4) increase the system's computational overhead and 5) require that the model go through costly (in terms of time and computation needed) hyperparameter tuning before it can work robustly on a new task. In this paper we propose MAML++, an improved variant of the MAML framework that offers the flexibility of MAML along with many improvements, such as robust and stable training, automatic learning of the inner loop hyperparameters, greatly improved computational efficiency both during inference and training, and significantly improved generalization performance. MAML++ is evaluated in the few-shot learning setting, where the system sets a new state of the art across all established few-shot learning tasks on both Omniglot and Mini-Imagenet, performing as well as or better than all established meta-learning methods on both tasks. The set-to-set few-shot learning setting BID25 neatly casts few-shot learning as a meta-learning problem. In set-to-set few-shot learning we have a number of tasks; each task is composed of a support set, which is used for task-level learning, and a target set, which is used for evaluating the base-model on a certain task after it has acquired task-specific (or fast) knowledge. Furthermore, all available tasks are split into 3 sets, the meta-training set, the meta-validation set and the meta-test set, used for training, validating and testing our meta-learning model respectively. Once meta-learning was shown to be an effective framework for few-shot learning and the set-to-set approach was introduced, further developments in few-shot learning were made in quick succession. One contribution was Matching Networks BID25. Matching networks achieve few-shot learning by learning to match target set items to support set items.
More specifically, a matching network learns to match the target set items to the support set items using cosine distance and a fully differentiable embedding function. First, the support set embedding function g, parameterized as a deep neural network, embeds the support set items into embedding vectors; then the target set embedding function f embeds the target set items. Once all data-item embeddings are available, cosine distance is computed for all target set embeddings when compared to all support set embeddings. As a result, for each target set item, a vector of cosine distances with respect to all support set items is generated (with each distance's column tied to the respective support set class). Then, the softmax function is applied on the generated distance vectors, to convert them into probability distributions over the support set classes. Another notable advancement was the gradient-conditional meta-learner LSTM BID19 that learns how to update a base-learner model. At inference time, the meta-learner model applies a single update on the base-learner given gradients with respect to the support set. The fully updated base-model then computes predictions on the target set. The target set predictions are then used to compute a task loss. Furthermore, they jointly learn the meta-learner's parameters as well as the base-learner's initialization, such that after a small number of steps it can do very well on a given task. The authors ran experiments on Mini-Imagenet where they exceeded the performance of Matching Networks. In Model Agnostic Meta-Learning (MAML) BID7, the authors proposed increasing the number of gradient update steps on the base-model and replacing the meta-learner LSTM with Batch Stochastic Gradient Descent BID11, which as a result speeds up the process of learning and, interestingly, improves generalization performance, achieving state-of-the-art performance on both Omniglot and Mini-Imagenet. In Meta-SGD BID15, the authors proposed learning a static learning rate and an update direction for each parameter of the base-model, in addition to learning the initialization parameters of the base-model. Meta-SGD showcases significantly improved generalization performance (when compared to MAML) across all few-shot learning tasks, whilst only requiring a single inner loop update step. However, this practice effectively doubles the model parameters and computational overhead of the system. Model Agnostic Meta-Learning (MAML) BID7 is a meta-learning framework for few-shot learning. MAML is elegantly simple yet can produce state-of-the-art results in few-shot regression/classification and reinforcement learning problems. In a sentence, MAML learns good initialization parameters for a network, such that after a few steps of standard training on a few-shot dataset, the network will perform well on that few-shot task. More formally, we define the base model to be a neural network f_θ with meta-parameters θ. We want to learn an initial θ = θ_0 such that, after a small number N of gradient update steps on data from a support set S_b to obtain θ_N, the network performs well on that task's target set T_b. Here b is the index of a particular support set task in a batch of support set tasks. This set of N update steps is called the inner-loop update process.
The updated base-network parameters after i steps on data from the support task S_b can be expressed as: θ^b_i = θ^b_{i−1} − α ∇_θ L_{S_b}(f_{θ^b_{i−1}}), where α is the learning rate, θ^b_i are the base-network weights after i steps towards task b, and L_{S_b}(f_{θ^b_{i−1}}) is the loss on the support set of task b after (i − 1) (i.e. the previous step) update steps. Assuming that our task batch size is B, we can define a meta-objective, which can be expressed as: L_meta(θ_0) = Σ_{b=1}^{B} L_{T_b}(f_{θ^b_N(θ_0)}), where we have explicitly denoted the dependence of θ^b_N on θ_0, given by unrolling the inner loop. The objective measures the quality of an initialization θ_0 in terms of the total loss of using that initialization across all tasks. This meta-objective is now minimized to optimize the initial parameter value θ_0. It is this initial θ_0 that contains the across-task knowledge. The optimization of this meta-objective is called the outer-loop update process. The resulting update for the meta-parameters θ_0 can be expressed as: θ_0 = θ_0 − β ∇_{θ_0} Σ_{b=1}^{B} L_{T_b}(f_{θ^b_N(θ_0)}), where β is a learning rate and L_{T_b} denotes the loss on the target set for task b. In this paper we use the cross-entropy BID5 BID20 loss throughout. The simplicity, elegance and high performance of MAML make it a very powerful framework for meta-learning. However, MAML also has many issues that make it problematic to use. Training Instability: Depending on the neural network architecture and the overall hyperparameter setup, MAML can be very unstable during training, as illustrated in Figure 1. Optimizing the outer loop involves backpropagating derivatives through an unfolded inner loop consisting of the same network applied multiple times. This alone could be a cause for gradient issues. However, the gradient issues are further compounded by the model architecture, which is a standard 4-layer convolutional network without skip-connections. The lack of any skip-connections means that every gradient must be passed through each convolutional layer many times; effectively the gradients will be multiplied by the same sets of parameters multiple times. After multiple back-propagation passes, the large depth of the unfolded network and the lack of skip connections can cause gradient explosion and diminishing gradient problems, respectively. Second Order Derivative Cost: Optimization through gradient update steps requires the computation of second order gradients, which are very expensive to compute. The authors of MAML proposed using first-order approximations to speed up the process by a factor of three; however, using these approximations can have a negative impact on the final generalization error. Further attempts at using first-order methods have been made in Reptile BID18, where the authors apply standard SGD on a base-model and then take a step from their initialization parameters towards the parameters of the base-model after N steps. The results of Reptile vary, in some cases exceeding MAML, and in others producing results inferior to MAML. Approaches to reduce computation time while not sacrificing generalization performance have yet to be proposed. Absence of Batch Normalization Statistic Accumulation: A further issue that affects the generalization performance is the way that batch normalization is used in the experiments in the original MAML paper. Instead of accumulating running statistics, the statistics of the current batch were used for batch normalization. This results in batch normalization being less effective, since the biases learned have to accommodate a variety of different means and standard deviations instead of a single mean and standard deviation.
On the other hand, if batch normalization uses accumulated running statistics, it will eventually converge to some global mean and standard deviation. This leaves only a single mean and standard deviation to learn biases for. Using running statistics instead of batch statistics can greatly increase convergence speed, stability and generalization performance, as the normalized features will result in a smoother optimization landscape BID21. Shared (across step) Batch Normalization Bias: An additional problem with batch normalization in MAML stems from the fact that batch normalization biases are not updated in the inner-loop; instead the same biases are used throughout all iterations of the base-models. Doing this implicitly assumes that all base-models are the same throughout the inner loop updates and hence have the same distribution of features passing through them. This is a false assumption to make, since, with each inner loop update, a new base-model is instantiated that is different enough from the previous one to be considered a new model from a bias estimation point of view. Thus learning a single set of biases for all iterations of the base-model can restrict performance. Shared Inner Loop (across step and across parameter) Learning Rate: One issue that affects both generalization and convergence speed (in terms of training iterations) is the use of a shared learning rate for all parameters and all update steps. Doing so introduces two major problems. Having a fixed learning rate requires doing multiple hyperparameter searches to find the correct learning rate for a specific dataset; this process can be very computationally costly, depending on how the search is done. The authors in BID15 propose to learn a learning rate and update direction for each parameter of the network. Doing so solves the issue of manually having to search for the right learning rate, and also allows individual parameters to have smaller or larger learning rates. However, this approach brings its own problems. Learning a learning rate for each network parameter means increased computational effort and increased memory usage, since the network contains between 40K and 50K parameters depending on the dimensionality of the data-points. Fixed Outer Loop Learning Rate: In MAML the authors use Adam with a fixed learning rate to optimize the meta-objective. Annealing the learning rate using either step or cosine functions has proven crucial to achieving state-of-the-art generalization performance in a multitude of settings BID16 BID8 BID13 BID9. Thus, we theorize that using a static learning rate reduces MAML's generalization performance and might also be a reason for slower optimization. Furthermore, having a fixed learning rate might mean that one has to spend more (computational) time tuning the learning rate. In this section we propose methods for solving the issues with the MAML framework described in Section 3.1. Each solution has a reference identical to the reference of the issue it is attempting to solve. Gradient Instability → Multi-Step Loss Optimization (MSL): MAML works by minimizing the target set loss computed by the base-network after it has completed all of its inner-loop updates towards a support set task. Instead we propose minimizing the target set loss computed by the base-network after every step towards a support set task. More specifically, we propose that the loss minimized is a weighted sum of the target set losses after every support set loss update.
More formally:

θ_0 = θ_0 − β ∇_{θ_0} Σ_{b=1}^{B} Σ_{i=0}^{N} v_i L_{T_b}(f_{θ_i^b})

where β is a learning rate, L_{T_b}(f_{θ_i^b}) denotes the target set loss of task b when using the base-network weights after i steps towards minimizing the support set task, and v_i denotes the importance weight of the target set loss at step i, which is used to compute the weighted sum. By using the multi-step loss proposed above we improve gradient propagation, since now the base-network weights at every step receive gradients both directly (from the current step loss) and indirectly (from losses coming from subsequent steps). With the original methodology described in Section 3, the base-network weights at every step except the last one were optimized implicitly as a result of backpropagation, which caused many of the instability issues MAML had. However, using the multi-step loss alleviates this issue, as illustrated in FIG0. Furthermore, we employ an annealed weighting for the per-step losses. Initially all losses have equal contributions towards the loss, but as iterations increase, we decrease the contributions from earlier steps and slowly increase the contribution of later steps. This is done to ensure that as training progresses the final step loss receives more attention from the optimizer, thus ensuring it reaches the lowest possible loss. If the annealing is not used, we found that the final loss might be higher than with the original formulation. Second Order Derivative Cost → Derivative-Order Annealing (DA): One way of making MAML more computationally efficient is reducing the number of inner-loop updates needed, which can be achieved with some of the methods described in subsequent sections of this report. However, in this paragraph, we propose a method that reduces the per-step computational overhead directly. The authors of MAML proposed the usage of first-order approximations of the gradient derivatives. However, they applied the first-order approximation throughout the whole of the training phase. Instead, we propose to anneal the derivative-order as training progresses. More specifically, we propose to use first-order gradients for the first 50 epochs of the training phase, and to then switch to second-order gradients for the remainder of the training phase. We empirically demonstrate that doing so greatly speeds up the first 50 epochs, while retaining the strong generalization performance that second-order gradients provide to the model. An additional interesting observation is that derivative-order annealing experiments showed no incidents of exploding or diminishing gradients, contrary to second-order-only experiments, which were more unstable. Using first-order gradients before switching to second-order derivatives can thus serve as a strong pretraining method that learns parameters less likely to produce gradient explosion/diminishment issues. Absence of Batch Normalization Statistic Accumulation → Per-Step Batch Normalization Running Statistics (BNRS): In the original implementation of MAML, the authors used only the current batch statistics as the batch normalization statistics. This, we argue, caused a variety of undesirable effects described in Section 3.1. To alleviate the issues we propose using running batch statistics for batch normalization. A naive implementation of batch normalization in the context of MAML would require sharing running batch statistics across all update steps of the inner-loop fast-knowledge acquisition process.
However, doing so would have the undesirable consequence that the stored statistics would be shared across all inner loop updates of the network. This would cause optimization issues and potentially slow down or altogether halt optimization, due to the increasing complexity of learning parameters that can work across various updates of the network parameters. A better alternative is to collect statistics in a per-step regime. To collect running statistics per-step, one needs to instantiate N (where N is the total number of inner-loop update steps) sets of running means and running standard deviations for each batch normalization layer in the network, and update the running statistics respectively with the steps being taken during optimization. The per-step batch normalization methodology should speed up optimization of MAML whilst potentially improving generalization performance. Shared (across step) Batch Normalization Bias → Per-Step Batch Normalization Weights and Biases (BNWB): In the MAML paper the authors trained their model to learn a single set of biases for each layer. Doing so assumes that the distributions of features passing through the network are similar. However, this is a false assumption, since the base-model is updated a number of times, thus making the feature distributions increasingly dissimilar from each other. To fix this problem we propose learning a set of biases per-step within the inner-loop update process. Doing so means that batch normalization will learn biases specific to the feature distributions seen at each step, which should increase convergence speed, stability and generalization performance. Shared Inner Loop Learning Rate (across step and across parameter) → Learning Per-Layer Per-Step Learning Rates and Gradient Directions (LSLR): Previous work in BID15 demonstrated that learning a learning rate and gradient direction for each parameter in the base-network improved the generalization performance of the system. However, that had the consequence of an increased number of parameters and increased computational overhead. So instead, we propose learning a learning rate and direction for each layer in the network, as well as learning different learning rates for each adaptation of the base-network as it takes steps. Learning a learning rate and direction for each layer instead of for each parameter should reduce the memory and computation needed whilst providing additional flexibility in the update steps. Furthermore, for each learning rate learned, there will be N instances of that learning rate, one for each step to be taken. By doing this, the parameters are free to learn to decrease the learning rates at each step, which may help alleviate overfitting. Fixed Outer Loop Learning Rate → Cosine Annealing of Meta-Optimizer Learning Rate (CA): In MAML the authors use a static learning rate for the optimizer of the meta-model. Annealing the learning rate, either by using step functions BID8 or cosine functions BID16, has proved vital in learning models with higher generalization power. The cosine annealing schedule has been especially effective in producing state-of-the-art results whilst removing the need for any hyperparameter searching on the learning rate space. Thus, we propose applying the cosine annealing schedule to the meta-model's optimizer (i.e. the meta-optimizer). Annealing the learning rate allows the model to fit the training set more effectively and as a result might produce higher generalization performance. The datasets used to evaluate our methods were the Omniglot and Mini-Imagenet BID25 BID19 datasets.
Each dataset is split into 3 sets: a training, validation and test set. The Omniglot dataset is composed of 1623 character classes from various alphabets, with 20 instances of each class in the dataset. For Omniglot we shuffle all character classes and randomly select 1150 for the training set; from the remaining classes we use 50 for validation and 423 for testing. In most few-shot learning papers the first 1200 classes are used for training and the remaining for testing. However, having a small validation set to choose the best model is crucial, so we choose to use a small set of 50 classes as the validation set. For each class we use all 20 available samples. Furthermore, for the Omniglot dataset, data augmentation is applied to the images in the form of rotations in 90 degree increments. Rotated class samples are considered new classes, e.g. a 180-degree rotated character C is considered a different class from a non-rotated C, thus effectively yielding 1623 × 4 classes in total. However, the rotated classes are generated dynamically after the character classes have been split into the sets, such that rotated samples from a class reside in the same set (i.e. the training, validation or test set). The Mini-Imagenet dataset was proposed in BID19; it consists of 600 instances of 100 classes from the ImageNet dataset, scaled down to 84x84. We use the split proposed in BID19, which consists of 64 classes for training, 12 classes for validation and 24 classes for testing. To evaluate our methods we adopted a hierarchical hyperparameter search methodology. First we began with the baseline MAML experiments, which were run on the 5/20-way and 1/5-shot settings on the Omniglot dataset and the 5-way 1/5-shot setting on the Mini-Imagenet dataset. Then we added each one of our 6 methodologies on top of the default MAML and ran experiments for each one separately. Once this stage was completed, we combined the approaches that showed improvements in either generalization performance or convergence speed (both in terms of number of epochs and clock-time) and ran a final experiment to establish any potential gains from the combination of the techniques. An experiment consisted of training for 150 epochs, each epoch consisting of 500 iterations. At the end of each epoch, we evaluated the performance of the model on the validation set. Upon completion of all epochs, an ensemble of the top 3 performing per-epoch models on the validation set was applied to the test set, thus producing the final test performance of the model. An evaluation run consisted of inference on 600 unique tasks. A distinction between the training and evaluation tasks was that the training tasks were generated dynamically and continually, without repeating previously sampled tasks, whilst the 600 evaluation tasks were identical across epochs, thus ensuring that the comparison between models was fair from an evaluation-set viewpoint. Every experiment was repeated for 3 independent runs. The models were trained using the Adam optimizer with a learning rate of 0.001, β_1 = 0.9 and β_2 = 0.99. Furthermore, all Omniglot experiments used a task batch size of 16, whereas the Mini-Imagenet experiments used task batch sizes of 4 and 2 for the 5-way 1-shot and 5-way 5-shot experiments, respectively. Our proposed methodologies are empirically shown to improve the original MAML framework. In TAB0 one can see how our proposed approach performs on Omniglot.
Each proposed methodology can individually outperform MAML; however, the most notable improvements come from the learned per-step per-layer learning rates and the per-step batch normalization methodology. In the 5-way 1-shot tasks MAML++ achieves 99.47%, and in the 20-way Omniglot tasks MAML++ achieves 97.76% and 99.33% in the 1-shot and 5-shot tasks, respectively. MAML++ also showcases improved convergence speed in terms of training iterations required to reach the best validation performance. Furthermore, the multi-step loss optimization technique substantially improves the training stability of the model, as illustrated in FIG0. In TAB0 we also include the results of our own implementation of MAML, which reproduces all results except the 20-way 1-shot Omniglot case. Difficulty in replicating the specific results has also been noted before in BID10. We base our conclusions on the relative performance between our own MAML implementation and the proposed methodologies. TAB1 showcases MAML++ on Mini-Imagenet tasks, where MAML++ sets a new state of the art in both the 5-way 1-shot and 5-shot cases, achieving 52.15% and 68.32%, respectively. More notably, MAML++ can achieve very strong 1-shot results of 51.05% with only a single inner loop step required. Not only is MAML++ cheaper due to the usage of derivative-order annealing, but also because of the much reduced number of inner loop steps. Another notable observation is that MAML++ converges to its best generalization performance much faster (in terms of iterations required) when compared to MAML, as shown in FIG0. In this paper we delve deep into what makes or breaks the MAML framework and propose multiple ways to reduce the inner loop hyperparameter sensitivity, improve the generalization error, and stabilize and speed up MAML. The resulting approach, called MAML++, sets a new state of the art across all few-shot tasks, on both Omniglot and Mini-Imagenet. The results of the approach indicate that learning per-step learning rates, batch normalization parameters and optimizing on per-step target losses appears to be key for fast, highly automatic and strongly generalizable few-shot learning.
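To tie the mechanics above together, here is a minimal self-contained sketch of second-order MAML with the multi-step loss (MSL) and per-step learned inner-loop learning rates (a simplified, single-parameter-group LSLR), plus a cosine-annealed meta-optimizer (CA), on a toy linear-regression task family. This is our own illustration under stated assumptions, not the authors' code; names such as loss_fn, inner_lrs and the annealing schedule are illustrative.

```python
import torch

def loss_fn(w, x, y):
    # Toy squared-error loss standing in for the cross-entropy used in the paper.
    return ((x @ w - y) ** 2).mean()

n_steps, n_epochs = 5, 100
theta0 = torch.zeros(3, 1, requires_grad=True)             # meta-initialization
inner_lrs = torch.nn.Parameter(torch.full((n_steps,), 0.01))  # one LR per inner step

def multi_step_meta_loss(x_s, y_s, x_t, y_t, v):
    """MSL: weighted sum of target-set losses evaluated after every inner step."""
    w, total = theta0, 0.0
    for i in range(n_steps):
        grad, = torch.autograd.grad(loss_fn(w, x_s, y_s), w, create_graph=True)
        w = w - inner_lrs[i] * grad          # differentiable, per-step learned LR
        total = total + v[i] * loss_fn(w, x_t, y_t)
    return total

def anneal_weights(epoch):
    """Start with uniform weights; shift importance towards the final step."""
    v = torch.ones(n_steps) / n_steps
    frac = min(epoch / (0.5 * n_epochs), 1.0)   # illustrative schedule
    return (1 - frac) * v + frac * torch.eye(n_steps)[-1]

meta_opt = torch.optim.Adam([theta0, inner_lrs], lr=1e-3)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(meta_opt, T_max=n_epochs)

for epoch in range(n_epochs):                   # outer loop
    v = anneal_weights(epoch)
    meta_loss = 0.0
    for _ in range(4):                          # task batch of size B = 4
        w_true = torch.randn(3, 1)              # sample a synthetic task
        x_s, x_t = torch.randn(10, 3), torch.randn(10, 3)
        y_s, y_t = x_s @ w_true, x_t @ w_true
        meta_loss = meta_loss + multi_step_meta_loss(x_s, y_s, x_t, y_t, v)
    meta_opt.zero_grad(); meta_loss.backward(); meta_opt.step()
    scheduler.step()                            # cosine-annealed outer LR
```

Derivative-order annealing would amount to passing create_graph=False (first-order) for the early part of training and True afterwards; the per-step batch normalization statistics and biases apply only to networks with batch normalization layers and are omitted here.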
MAML is great, but it has many problems; we solve many of those problems and as a result we learn most hyperparameters end to end, speed up training and inference, and set a new SOTA in few-shot learning
909
scitldr
Community detection in graphs is of central importance in graph mining, machine learning and network science. Detecting overlapping communities is especially challenging, and remains an open problem. Motivated by the success of graph-based deep learning in other graph-related tasks, we study the applicability of this framework for overlapping community detection. We propose a probabilistic model for overlapping community detection based on the graph neural network architecture. Despite its simplicity, our model outperforms the existing approaches in the community recovery task by a large margin. Moreover, due to the inductive formulation, the proposed model is able to perform out-of-sample community detection for nodes that were not present at training time. Graphs provide a natural way of representing complex real-world systems. For understanding the structure and behavior of these systems, community detection methods are an essential tool. Detecting communities allows us to analyze social networks BID9, to detect fraud BID24, to discover functional units of the brain BID8, and to predict functions of proteins BID26. Over the last decades, this problem has attracted significant attention of the research community and numerous models and algorithms have been proposed BID32. In particular, it is a well-known fact that communities in real graphs are in fact overlapping BID34, thus requiring the development of advanced models to capture this complex structure. In this regard, the advent of deep learning methods for graph-structured data opens new possibilities for designing more accurate and more scalable algorithms. Indeed, deep learning on graphs has already shown state-of-the-art results for various graph-related tasks such as semi-supervised node classification and link prediction BID2. Likewise, a few deep learning methods for community detection in graphs have been proposed BID36 BID4. However, they all have one drawback in common: they only focus on the special case of disjoint (non-overlapping) communities. Handling overlapping communities is a requirement not yet met by existing deep learning approaches to community detection. In this paper we propose an end-to-end deep probabilistic model for overlapping community detection in graphs. Our core idea lies in predicting the community affiliations using a graph neural network. Despite its simplicity, our model achieves state-of-the-art results in community recovery and significantly outperforms the existing approaches. Moreover, our model is able to perform out-of-sample (inductive) community detection for nodes that were not seen at training time. To summarize, our main contributions are:

• We propose the Deep Overlapping Community detection (DOC) model - a simple, yet effective deep learning model for overlapping community detection in graphs. DOC is one of the few methods able to perform community detection both transductively and inductively.
• We introduce 5 new datasets for overlapping community detection, which can act as a benchmark to stimulate future work in this research area.
• We perform a thorough experimental evaluation of our model, and show its superior performance when comparing with established methods for overlapping community detection.

Assume that we are given an undirected unweighted graph G = (V, E), with N := |V| nodes and M := |E| edges, represented by a symmetric adjacency matrix A ∈ {0, 1}^{N×N}. Moreover, each node is associated with D real-valued attributes, which can be represented as a matrix X ∈ R^{N×D}.
The goal of overlapping community detection is to assign the nodes in the graph into C communities. Such an assignment can be represented as a non-negative community affiliation matrix F ∈ R_{≥0}^{N×C}, where F_uc denotes the strength of node u's membership in community c (with the notable special case of binary hard assignment F ∈ {0, 1}^{N×C}). There is no single universally accepted definition of community in the literature. However, most recent works tend to agree with the statement that a community is a group of nodes that have a higher probability to form edges with each other than with other nodes in the graph BID6. One can broadly subdivide the existing methods for overlapping community detection into three categories: approaches based on non-negative matrix factorization (NMF), probabilistic inference, or heuristics. Methods based on NMF try to recover the community affiliation matrix F by performing a low-rank decomposition of the adjacency matrix A or some other related matrix BID31 BID15. Probabilistic approaches, such as BID37, treat F as a latent variable in a generative model for the graph, p(A, F). This way the problem of community detection is cast as an instance of probabilistic inference. Lastly, heuristic-based approaches usually define a goodness measure, like within-community edge density BID7, and then directly optimize it. All of these approaches can be very generally formulated as an optimization problem

min_{F ∈ R_{≥0}^{N×C}} L(A, F)

for an appropriate choice of the loss function L, be it the Frobenius norm L(A, F) = ‖A − FF^T‖_F^2 or a negative log-likelihood −log p(A | F). Besides these traditional approaches, one can also view the problem of community detection through the lens of representation learning. The community affiliation matrix F can be considered as an embedding of the nodes into R_{≥0}^C, with the aim of preserving the community structure. Given the recent success of deep representation learning for graphs BID2, a question arises: "Can the advances in deep representation learning for graphs be used to design better community detection algorithms?". A very simple idea is to first apply a node embedding approach to the graph, and then cluster the nodes in the embedding space using k-means to obtain communities (as done in, e.g., BID29). However, such an approach is only able to detect disjoint communities, which does not correspond to the structure of communities in real-world graphs BID34. Instead, we argue that an end-to-end deep learning architecture able to detect overlapping communities is preferable. Traditional community detection methods treat F as a free variable, with respect to which optimization is performed (FIG0). This is similar to how embeddings are learned in methods like DeepWalk BID23 and node2vec. In contrast, recent works of BID13; BID1 have adopted the approach of defining the embeddings as a function of the node attributes, F := f_θ(X, A), and solving the optimization problem

min_θ L(A, f_θ(X, A))

where f_θ is defined by a neural network. Such a formulation allows one to:

• achieve better performance in downstream tasks like link prediction and node classification;
• naturally incorporate the attributes X without hand-crafting the generative model p(X, F);
• generate embeddings inductively for previously unseen nodes.

We propose to use this framework for overlapping community detection, and describe our model in the next section. We let the community affiliations be produced by a three-layer graph convolutional neural network (GCN), as defined in BID14.
F := GCN_θ(A, X) = ReLU(Â ReLU(Â ReLU(Â X W^(1)) W^(2)) W^(3))

where Â = D̃^{−1/2} Ã D̃^{−1/2} is the normalized adjacency matrix, Ã = A + I, and D̃ is the corresponding degree matrix of Ã. A ReLU nonlinearity is applied element-wise to the output layer to ensure non-negativity of the community affiliation matrix F. Any other graph neural network architecture could be used here - we choose GCN because of its simplicity and popularity. Link function. A good F explains the community structure of the graph well. To model this formally, we adopt a probabilistic approach to community detection, where we need to define the likelihood p(A|F). A standard assumption in probabilistic community detection is that the edges A_uv are conditionally independent given the community memberships F. Thus, once the F matrix is given, every pair of nodes (u, v) produces an interaction based on their community affiliations, X_uv = F_u F_v^T. For a probabilistic interpretation, this interaction is transformed into an edge probability by means of a link function g: R_{≥0} → [0, 1]. The edge probability is then given by

p(A_uv = 1 | F) = g(F_u F_v^T)

We consider two choices for the link function g: the Bernoulli-Poisson link and the sigmoid link. The Bernoulli-Poisson link, defined as ξ(X_uv) = 1 − exp(−X_uv), is a common probabilistic model for overlapping community detection BID37 BID28. Note that under the BP model a pair of nodes that have no communities in common (i.e. F_u F_v^T = 0) has a zero probability of forming an edge. This is an unrealistic assumption, which can be easily fixed by adding a small offset ε > 0, that is ξ(X_uv) = 1 − exp(−X_uv − ε). The sigmoid link, defined as σ(X_uv) = (1 + exp(−X_uv))^{−1}, is the standard choice for binary classification problems. It can also be used to convert the edge scores into probabilities in probabilistic models for graphs BID16 BID27 BID13. Since a non-negative F implies that the interaction between every pair of nodes X_uv is at least 0, the edge probability under the sigmoid model is always above σ(0) = 0.5. This can be fixed by introducing an offset: g(X_uv) = σ(X_uv − b). The offset b becomes an additional variable to optimize over, a closed-form expression for which is provided in BID25. However, while optimizing over b produces better likelihood scores, we have empirically observed that fixing it to zero leads to the same performance in community recovery (Section 4.3). Thus, we set b = 0 in our experiments. We consider both link functions, and denote the two variants of our model as DOC-BP and DOC-Sigmoid for the Bernoulli-Poisson and sigmoid link functions, respectively. Maximizing the likelihood is equivalent to minimizing the negative log-likelihood, which corresponds to the well-known binary cross-entropy loss function. Since real-world graphs are extremely sparse (only 10^{−2} − 10^{−5} of possible edges are present), we are dealing with an extremely imbalanced binary classification problem. A standard way of dealing with this problem is by balancing the contribution from both classes, which corresponds to the following objective function:

L(θ) = −E_{(u,v)∼P_E}[log g(X_uv)] − E_{(u,v)∼P_N}[log(1 − g(X_uv))]

where P_E and P_N stand for uniform distributions over edges and non-edges, respectively. Evaluating the gradient of the full loss requires O(N^2) operations (since we need to compute the expectation over N^2 possible edges/non-edges). This is impractical even for moderately-sized graphs. Instead, we optimize the objective using stochastic gradient descent. That is, at every iteration we approximate ∇L using S randomly sampled edges, as well as the same number of non-edges.
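The pieces defined so far (the three-layer GCN encoder, the two link functions, and the sampled balanced loss) can be sketched as follows. This is our own illustration, not the authors' released code; layer sizes and names are assumptions, and the uniform node pairs used as non-edges are an approximation that holds with high probability in sparse graphs.

```python
import torch

def normalize_adj(A):
    """A_hat = D~^{-1/2} (A + I) D~^{-1/2}, as in Equation 3."""
    A_tilde = A + torch.eye(A.shape[0])
    d_inv_sqrt = A_tilde.sum(1).pow(-0.5)
    return d_inv_sqrt[:, None] * A_tilde * d_inv_sqrt[None, :]

class GCNEncoder(torch.nn.Module):
    """Three-layer GCN; the final ReLU keeps the affiliations F non-negative."""
    def __init__(self, d_in, d_hidden, n_communities):
        super().__init__()
        self.lin1 = torch.nn.Linear(d_in, d_hidden)
        self.lin2 = torch.nn.Linear(d_hidden, d_hidden)
        self.lin3 = torch.nn.Linear(d_hidden, n_communities)

    def forward(self, A_hat, X):
        h = torch.relu(self.lin1(A_hat @ X))
        h = torch.relu(self.lin2(A_hat @ h))
        return torch.relu(self.lin3(A_hat @ h))       # F, shape (N, C)

def edge_prob_bp(F, u, v, eps=1e-5):
    """Bernoulli-Poisson link with the small offset eps."""
    return 1.0 - torch.exp(-(F[u] * F[v]).sum(-1) - eps)

def edge_prob_sigmoid(F, u, v):
    """Sigmoid link with the offset b fixed to 0."""
    return torch.sigmoid((F[u] * F[v]).sum(-1))

def sampled_balanced_loss(F, edge_index, num_nodes, S=1000):
    """Stochastic estimate of the balanced cross-entropy: S random edges and
    S uniform node pairs standing in for non-edges (sparse-graph assumption)."""
    idx = torch.randint(edge_index.shape[0], (S,))
    u, v = edge_index[idx, 0], edge_index[idx, 1]
    pos = -torch.log(edge_prob_bp(F, u, v).clamp(min=1e-12)).mean()
    u_n = torch.randint(num_nodes, (S,))
    v_n = torch.randint(num_nodes, (S,))
    neg = -torch.log((1 - edge_prob_bp(F, u_n, v_n)).clamp(min=1e-12)).mean()
    return pos + neg
```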
To summarize, we use stochastic gradient descent to optimize the objective

L(θ) ≈ −(1/S) Σ_{(u,v)∈E_S} log g(X_uv) − (1/S) Σ_{(u,v)∈N_S} log(1 − g(X_uv))

where E_S and N_S are sets of S randomly sampled edges and non-edges, and the parameters θ are the weights of the neural network.

Datasets. We perform all our experiments using the following real-world graph datasets. Facebook BID18 is a collection of small (100-1000 nodes) ego-networks from the Facebook graph. In our experiments we consider the 5 largest of these ego-networks (including Facebook-0, Facebook-107, Facebook-1684 and Facebook-1912). Larger graph datasets (1000+ nodes) with reliable ground-truth overlapping community information and node attributes are not openly available, which hampers the evaluation of methods for overlapping community detection in attributed graphs. For this reason we have collected and preprocessed 5 real-world datasets that satisfy these criteria and can act as future benchmarks (we will provide the datasets for download after the blind-reviewing phase). Coauthor-CS and Coauthor-Physics are subsets of the Microsoft Academic co-authorship graph, constructed based on the data from the KDD Cup 2016. Communities correspond to research areas in computer science and physics, respectively. Reddit-Technology and Reddit-Gaming represent user-user graphs from the content-sharing platform Reddit. Communities correspond to subreddits - topic-specific communities that users participate in. Amazon is a segment of the Amazon co-purchase graph BID19, where product categories represent the communities. Details about how the datasets were constructed, as well as exploratory analysis, are provided in Appendix B.

Model architecture. We denote the model variant with the Bernoulli-Poisson link as DOC-BP, and the model variant with the sigmoid link as DOC-Sigmoid. For all experiments we use a 3-layer GCN (Equation 3) as the basis for both models. We use the same model configuration for all other experiments, unless otherwise specified. More details about the model and the training procedure are provided in Appendix A. All reported results are averaged over 10 random initializations, unless otherwise specified. As mentioned in Section 3, evaluation of the full loss (Equation 5) and its gradients is computationally prohibitive due to its O(N^2) scaling. Instead, we propose to use a stochastic approximation that only depends on a fixed batch size S. We perform the following experiment to ensure that our training procedure converges to the same results as when using the full objective.

Experimental setup. We train the two variants of the model on the Facebook-1912 dataset, since it is small enough (N = 755) for full-batch training to be feasible. We compare the full-batch training procedure with stochastic training for different choices of the batch size S. Starting with the same initialization, we measure the respective full losses (Equation 5) over the iterations.

Results. FIG0 shows training curves for batch sizes S ∈ {500, 1000, 2500, 4000, 5000}, as well as for full-batch training. As we see, the stochastic training procedure is stable. For all batch sizes the loss converges very closely to the value achieved by full-batch training. The standard way of comparing overlapping community detection algorithms is by assessing how well they can recover communities in graphs for which the ground-truth community affiliations are known. It may happen that the information used as "ground truth communities" does not correlate with the graph structure BID22.
For the datasets considered in this paper, however, the ground-truth communities make sense both intuitively and quantitatively (see Appendix B for a more detailed discussion). Therefore, good performance in this experiment is a good indicator of the utility of an algorithm.

Predicting community membership. In order to compare the detected communities to the ground truth, we first need to convert the continuous community affiliations F into binary community assignments. We assign node u to community c if F_uc is above a threshold ρ. We set ρ = 0.4 for DOC-BP and ρ = 0.2 for DOC-Sigmoid, as these are the values that achieve the best performance on the Facebook-1912 dataset.

Metrics. We use overlapping normalized mutual information (NMI), as defined by BID20, in order to quantify the agreement of the detected communities with the ground-truth data.

Baselines. We compare our method against a number of established methods for overlapping community detection. BigCLAM is a probabilistic model based on the Bernoulli-Poisson link that only considers the graph structure. CESNA is an extension of BigCLAM that additionally models the generative process for node attributes. SNMF BID15 and CDE BID17 are non-negative matrix factorization approaches for overlapping community detection. We also compared against the LDense algorithm from BID7 - a heuristic-based approach that finds communities with maximum edge density and similar attributes. However, since it achieved less than 1% NMI for 8 out of 10 datasets, we don't include the results for LDense in the table. To ensure a fair comparison, all methods were given the true number of communities C. Other hyperparameters were set to their recommended values. Detailed configurations of the baselines are provided in Appendix C.

Results. TAB1 shows how well different methods score in the recovery of ground-truth communities. DOC-BP achieves the best or the second-best score for 9 out of 10 datasets. DOC-Sigmoid achieves the best or the second-best score 10 out of 10 times. This demonstrates the potential of deep learning methods for overlapping community detection. CESNA could not be run for the Amazon dataset, because it cannot handle continuous attributes. In contrast, both DOC model variants can be used with any kind of attributes out of the box. CDE was not able to process any of the graphs with N ≥ 7K nodes within 24 hours. On the other hand, both DOC-BP and DOC-Sigmoid converged in 30s-6min for all datasets except Amazon, where it took up to 20 minutes because of the dense attribute matrix. As we just saw, the DOC-BP and DOC-Sigmoid models, both based on the GCN architecture, are able to achieve superior performance in community detection. Intuitively, it makes sense to use a graph neural network (GNN) in our setting, since it allows us to incorporate the attribute information and also produces similar community vectors, F_u, for adjacent nodes. Nevertheless, we should ask whether it's possible to achieve comparable results with a simpler model. To answer this question, we consider the following two baselines.

Multilayer perceptron (MLP): Instead of the GCN (Equation 3), we use a simple fully-connected neural network to generate F:

F := MLP_θ(X)

This is indeed related to the model proposed by BID12. For this baseline, we use the same configuration (number and sizes of layers, training procedure, etc.) as for the GCN-based model.
Same as for the GCN (Equation 3), we optimize the parameters of the MLP with respect to the same objective:

min_θ L(A, MLP_θ(X))

Free variable (FV): As an even simpler baseline, we consider treating the community affiliations F as a free variable in the optimization:

min_{F ∈ R_{≥0}^{N×C}} L(A, F)

This is similar to standard community detection methods like BigCLAM. Since this optimization problem is rather different from those of the GCN (Equation 3) and the MLP (Equation 8), we perform additional hyperparameter optimization for the FV model. We consider different choices for the learning rate and two initialization strategies, while keeping other aspects of the training procedure as before (stochastic training, early stopping). We pick the configuration that achieved the best average NMI score across all datasets. Note that this gives a strong advantage to the FV model, since for the GCN and MLP models the hyperparameters were fixed without knowledge of the ground-truth communities.

Experimental setup. We compare the NMI scores obtained by all three models, both for the Bernoulli-Poisson and sigmoid link functions.

Results. As shown in TAB2, the GNN-based models outperform the simpler baselines in 16 out of 20 cases (remember that the free-variable version even had the advantage of picking the hyperparameters that lead to the highest NMI scores). This highlights the fact that attribute information alone is not enough for community detection, and incorporating the graph structure clearly helps to make better inferences. So far, we have observed that the DOC model is able to recover communities with high precision. What's even more interesting: since our model learns the mapping from the node attributes to the resulting community affiliations (Equation 3), it should also be possible to predict communities inductively for nodes that were not present at training time.

Experimental setup. We hide a randomly selected fraction of nodes from each community, and train the DOC-BP and DOC-Sigmoid models on the remaining graph. Once the parameters θ are learned, we perform a forward pass of each model using the full adjacency and attribute matrix. We then compute how well the communities were predicted for the nodes that were not present during training, using NMI as a metric. We compare with the MLP model (Equation 7) as a baseline.

Results. As can be seen in FIG1, both DOC-BP and DOC-Sigmoid are able to infer communities inductively for previously unseen nodes with high accuracy (NMI ≥ 40%), which is on the same level as for the transductive setting (TAB2). On the other hand, the MLP-BP and MLP-Sigmoid models both perform worse than the GCN-based ones, and significantly below their own scores for transductive community detection. This highlights the fact that graph-based neural network architectures provide a significant advantage for community detection. The problem of community detection in graphs is well-established in the research community, and methods such as stochastic block models BID0 and spectral methods BID30 have attracted a lot of attention. Despite the popularity of these methods, they are only suitable for detecting non-overlapping communities (i.e. partitioning the network), which is not the setting usually encountered in real-world networks BID34. Methods for overlapping community detection have been proposed BID32, but our understanding of their behavior is not as mature as for the non-overlapping methods. As discussed in Section 2, methods for OCD can be broadly divided into methods based on non-negative matrix factorization, probabilistic inference and heuristics.
These categories are not mutually exclusive, and often one method can be viewed as belonging to multiple categories. For example, the factorization-based approaches that minimize the Frobenius norm ‖A − FF^T‖_F^2 can equivalently be viewed as performing probabilistic inference under a Gaussian likelihood model. More generally, most NMF and probabilistic inference models perform a non-linear low-rank decomposition of the adjacency matrix, which can be connected to the generalized principal component analysis model BID5. Deep learning for graph data can be broadly subdivided into two main categories: graph neural networks and node embeddings. Graph neural networks BID14 are specialized neural network architectures that can operate on graph-structured data. The goal of embedding approaches BID23; BID1 is to learn vector representations of the nodes in a graph, which can later be used for other downstream machine learning tasks. One can perform k-means clustering on the node embeddings (as done in, e.g., BID29) to cluster nodes into communities. However, such an approach is not able to capture the overlapping community structure present in real-world graphs. Several works have devised deep learning methods for community detection in graphs. BID36 and BID3 propose deep learning approaches that seek a low-rank decomposition of the modularity matrix BID21. This means both of these approaches are limited to finding disjoint communities, as opposed to our algorithm. Also related to our model is the approach by BID12, where they use a deep belief network to generate the community affiliation matrix. However, their neural network architecture does not use the graph, which we have shown to be crucial in Section 4.4. Lastly, BID4 designed a neural network architecture for supervised community detection. Their model learns to detect communities by training on a labeled set with community information given. This is very different from this paper, where we learn to detect communities in a fully unsupervised manner. In this work we have proposed and studied two deep models for overlapping community detection: DOC-BP, based on the Bernoulli-Poisson link, and DOC-Sigmoid, which relies on the sigmoid link function. The two variants of our model achieve state-of-the-art results and convincingly outperform existing techniques in transductive and inductive community detection tasks. Using stochastic training, both approaches are highly efficient and scale to large graphs. Among the two proposed models, DOC-BP on average performs better than the DOC-Sigmoid variant. We leave it to future work to investigate the properties of the communities detected by these two methods. To summarize, the results obtained in our experimental evaluation provide strong evidence that deep learning for graphs deserves more attention as a framework for overlapping community detection.

Architecture. We use a 3-layer graph convolutional neural network (Equation 3) with hidden sizes of 64; the final (third) layer has size C (the number of communities to be detected). Dropout with 50% keep probability is applied at every layer. We don't use any other forms of regularization, such as weight decay.

Training. We train the model using the Adam optimizer with default parameters. The learning rate is set to 10^{−4}. We use the following early stopping strategy: before training, we set aside 1% of edges and the same number of non-edges. Every 25 gradient steps we compute the loss (Equation 5) for the validation edges and non-edges.
We stop training if there is no improvement to the best validation loss for 20 × 25 = 500 iterations, or after 5000 epochs, whichever happens first. The raw Amazon data is provided by BID19 at http://jmcauley.ucsd.edu/data/amazon/links.html.

• Nodes: A node in the graph represents a product sold at Amazon. To get a graph of manageable size, we restrict our attention to the products in 14 randomly-chosen subcategories of the "Video Games" supercategory.
• Edges: A pair of products (u, v) is connected by an edge if u is "also bought" with product v, or the other way around. The "also bought" information is provided in the raw data.
• Communities: We treat the subcategories (e.g. "Xbox 360", "PlayStation 4") as community labels. Every product can belong to multiple categories.
• Features: The authors of BID11 extracted visual features from product pictures using a deep CNN. We use these visual features as the attributes X in our experiments.

We use the dump of the Microsoft Academic Graph that was published for the KDD CUP 2016 competition (https://kddcup2016.azurewebsites.net/) to construct two co-authorship graphs - Coauthor-CS and Coauthor-Physics.

• Nodes: A node in the graph represents a researcher.
• Edges: A pair of researchers (u, v) is connected by an edge if u and v have co-authored one or more papers.
• Communities: For Computer Science (Coauthor-CS), we use venues as proxies for fields of study. We pick the top-5 venues for each subfield of CS according to Google Scholar (scholar.google.com). An author u is assigned to field of study c if he published at least 5 papers in venues associated with this field of study. For Physics (Coauthor-Physics), we use the Physical Review A, B, C, D, E journals as indicators of fields of study (= communities). An author u is assigned to field of study c if he published at least 5 papers in the respective Physical Review journal.
• Features: For each author u we construct a histogram over the keywords that were assigned to their papers. That is, the entry of the attribute matrix X_ud = # of papers that author u has published that have keyword d.

For this graph we had to remove from our consideration the papers that had too many (≥ 40) authors, since they led to very large fully-connected components in the resulting graph. Reddit is an online content-sharing platform where users share, rate, and comment on content on a wide range of topics. The site consists of a number of smaller topic-oriented communities, called subreddits. We downloaded a dump of Reddit comments for February 2018 from http://files.pushshift.io/reddit/comments/. Using the list provided at https://www.reddit.com/r/ListOfSubreddits/, we picked 48 gaming-related subreddits and 31 technology-based subreddits. For each of these groups of subreddits we constructed a graph as follows:

• Nodes: A node in the graph represents a user of Reddit, identified by their author id.
• Edges: A pair of users (u, v) is connected by an edge if u and v have both commented on the same 3 or more posts.
• Communities: We treat subreddits as communities. A user is assigned to a community c if he commented on at least 5 posts posted in that community.
• Features: For every user u we construct a histogram of the other subreddits (excluding the subreddits used as communities) that they commented in. That is, the entry of the attribute matrix X_ud = # of comments user u left on subreddit d.
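The edge-construction rule above (connect two users iff they commented on at least 3 common posts) can be sketched as follows; the function and variable names are our own illustration, not the preprocessing code used for the datasets.

```python
from collections import defaultdict
from itertools import combinations

def build_user_graph(comments, min_common_posts=3):
    """comments: iterable of (user_id, post_id) pairs.
    Returns the list of user pairs that commented on
    at least min_common_posts common posts."""
    users_per_post = defaultdict(set)
    for user, post in comments:
        users_per_post[post].add(user)       # set() dedups repeat comments
    common = defaultdict(int)
    for users in users_per_post.values():
        for u, v in combinations(sorted(users), 2):
            common[(u, v)] += 1              # count distinct shared posts
    return [pair for pair, count in common.items() if count >= min_common_posts]
```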
Co-purchase graphs, co-authorship graphs and content-sharing platforms are classic examples of networks with an overlapping community structure BID34, so using these communities as ground truth is justifiable. Additionally, we show that for all five graphs considered, the probability of a connection between a pair of nodes grows monotonically with the number of shared communities. This further shows that our choice of communities makes sense.

• We used the reference C++ implementations of BigCLAM and CESNA that were provided by the authors (https://github.com/snap-stanford/snap). The models were used with the default parameter settings for step size, backtracking line search constants, and balancing terms. Since CESNA can only handle binary attributes, we binarize the original attributes (set the nonzero entries to 1) if they have a different type.
• We implemented SNMF ourselves using Python. The F matrix is initialized by sampling from the uniform distribution. We run the optimization until the improvement in the reconstruction loss goes below 10^{−4} per iteration, or for 300 epochs, whichever happens first (a sketch of such an implementation is given after this list).
• We use the Matlab implementation of CDE provided by the authors. We set the hyperparameters to α = 1, β = 2, κ = 5, as recommended in the paper, and run the optimization for 20 iterations.
• We use the Python implementation of LDense provided by the authors (https://research.cs.aalto.fi/dmg/software.shtml), and run the algorithm with the recommended parameter settings. Same as with CESNA, since the method only supports binary attributes, we binarize the original data if necessary.
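The SNMF baseline is described above only in terms of its objective, initialization and stopping rule; the optimizer is unspecified, so the projected gradient descent used in this sketch is our own assumption.

```python
import numpy as np

def snmf(A, C, lr=1e-3, tol=1e-4, max_epochs=300, seed=0):
    """Symmetric NMF for min_{F >= 0} ||A - F F^T||_F^2 via projected
    gradient descent; uniform random init, stop when the per-iteration
    loss improvement drops below tol or after max_epochs epochs."""
    rng = np.random.default_rng(seed)
    F = rng.uniform(0.0, 1.0, size=(A.shape[0], C))
    prev = np.inf
    for _ in range(max_epochs):
        grad = 4.0 * (F @ (F.T @ F) - A @ F)    # gradient of the Frobenius loss
        F = np.maximum(F - lr * grad, 0.0)      # project onto F >= 0
        loss = np.linalg.norm(A - F @ F.T) ** 2
        if prev - loss < tol:
            break
        prev = loss
    return F
```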
Detecting overlapping communities in graphs using graph neural networks
910
scitldr
Neural Architecture Search (NAS) aims to facilitate the design of deep networks for new tasks. Existing techniques rely on two stages: searching over the architecture space and validating the best architecture. NAS algorithms are currently compared solely based on their results on the downstream task. While intuitive, this fails to explicitly evaluate the effectiveness of their search strategies. In this paper, we propose to evaluate the NAS search phase. To this end, we compare the quality of the solutions obtained by NAS search policies with that of random architecture selection. We find that: (i) on average, the state-of-the-art NAS algorithms perform similarly to the random policy; (ii) the widely-used weight sharing strategy degrades the ranking of the NAS candidates to the point of not reflecting their true performance, thus reducing the effectiveness of the search process. We believe that our evaluation framework will be key to designing NAS strategies that consistently discover architectures superior to random ones.
We empirically disprove a fundamental hypothesis of the widely-adopted weight sharing strategy in neural architecture search and explain why state-of-the-art NAS algorithms perform similarly to random search.
911
scitldr
In this paper we use the geometric properties of the optimal transport (OT) problem and the Wasserstein distances to define a prior distribution for the latent space of an auto-encoder. We introduce Sliced-Wasserstein Auto-Encoders (SWAE), which enable one to shape the distribution of the latent space into any samplable probability distribution without the need for training an adversarial network or having a likelihood function specified. In short, we regularize the auto-encoder loss with the sliced-Wasserstein distance between the distribution of the encoded training samples and a samplable prior distribution. We show that the proposed formulation has an efficient numerical solution that provides similar capabilities to Wasserstein Auto-Encoders (WAE) and Variational Auto-Encoders (VAE), while benefiting from an embarrassingly simple implementation. We provide extensive error analysis for our algorithm, and show its merits on three benchmark datasets. Learning such generative models boils down to minimizing a dissimilarity measure between the data distribution and the output distribution of the generative model. To this end, and following prior work, we approach the problem of generative modeling from the optimal transport point of view. The optimal transport problem BID38; BID22 provides a way to measure the distances between probability distributions by transporting (i.e., morphing) one distribution into another. Moreover, as opposed to the common information-theoretic dissimilarity measures (e.g., f-divergences), the p-Wasserstein dissimilarity measures that arise from the optimal transport problem: 1) are true distances, and 2) metrize the weak convergence of probability measures (at least on compact spaces). Wasserstein distances have recently attracted a lot of interest in the learning community BID11; BID14; BID22 due to their exquisite geometric characteristics BID34. See the supplementary material for an intuitive example showing the benefit of the Wasserstein distance over commonly used f-divergences. In this paper, we introduce a new type of auto-encoder for generative modeling (Algorithm 1), which we call the Sliced-Wasserstein auto-encoder (SWAE), that minimizes the sliced-Wasserstein distance between the distribution of the encoded samples and a samplable prior distribution. Our approach avoids the need to perform adversarial training in the encoding space and is not restricted to closed-form distributions, while still benefiting from a Wasserstein-like distance measure in the latent space. Calculating the Wasserstein distance can be computationally expensive, but our approach permits a simple numerical solution to the problem. Finally, we note that there have been several concurrent papers, including the works by BID10 (Deshpande et al.) and Şimşekli et al. (2018), that also looked into the application of the sliced-Wasserstein distance in generative modeling. Regardless of the concurrent nature of these papers, our work remains novel and is distinguished from these methods. BID10 use the sliced-Wasserstein distance to match the distributions of high-dimensional reconstructed images, which requires a large number of slices, O(10^4), while in our method, due to the distribution matching in the lower-dimensional latent space, we only need far fewer slices.
We also note that BID10 proposed to learn discriminative slices to mitigate the need for a very large number of random projections, which is in essence similar to the adversarial training used in GANs and contradicts our goal of avoiding adversarial training. Şimşekli et al., on the other hand, take an interesting but different approach of parameter-free generative modeling via sliced-Wasserstein flows. Let X denote the compact domain of a manifold in Euclidean space and let x_n ∈ X denote an individual input data point. Furthermore, let ρ_X be a Borel probability measure defined on X. We define the probability density function p_X(x) of the input data to satisfy:

dρ_X(x) = p_X(x)dx

Let φ : X → Z denote a deterministic parametric mapping from the input space to a latent space Z (e.g., a neural network encoder). To obtain the density of the push-forward of ρ_X with respect to φ, i.e., ρ_Z = φ_*(ρ_X), we use the Random Variable Transformation (RVT) theorem BID12. In short, the probability density function of the encoded samples z can be expressed in terms of φ and p_X by:

p_Z(z) = ∫_X p_X(x) δ(z − φ(x)) dx

where δ denotes the Dirac distribution function. Similar to variational auto-encoders (VAEs) BID18 and Wasserstein auto-encoders (WAE), our main objective is to encode the input data points x ∈ X into latent codes z ∈ Z such that: 1) x can be recovered/approximated from z, and 2) the probability density function of the encoded samples, p_Z, follows a prior distribution q_Z. Let ψ : Z → X be the decoder that maps the latent codes back to the original space such that

p_Y(y) = ∫_X p_X(x) δ(y − ψ(φ(x))) dx

where y denotes the decoded samples. It is straightforward to see that when ψ = φ^{−1} (i.e. ψ(φ(·)) = id(·)), the distribution of the decoder p_Y and the input distribution p_X are identical. Hence, in its most general form, the objective of such auto-encoders simplifies to learning φ and ψ so that they minimize a dissimilarity measure between p_Y and p_X, and between p_Z and q_Z. In what follows, we briefly review the existing dissimilarity measures for these distributions.

1.1 MINIMIZING DISSIMILARITY BETWEEN p_X AND p_Y

We first emphasize that VAEs often assume stochastic encoders and decoders BID18, while we consider the case of only deterministic mappings; although we note that, similar to WAE, SWAE can also be formulated with stochastic encoders. Different measures have been used previously to compute the dissimilarity between p_X and p_Y. Most notably, BID29 showed that for the general family of f-divergences, D_f(p_X, p_Y) (including the KL-divergence, Jensen-Shannon, etc.), using the Fenchel conjugate of the convex function f and minimizing D_f(p_X, p_Y) leads to a min-max problem that is equivalent to the adversarial training widely used in the generative modeling literature BID13; BID26; BID27. Others have utilized the rich mathematical foundation of the OT problem and Wasserstein distances BID14; BID22 to define a distance between p_X and p_Y. In Wasserstein-GAN, the authors utilized the Kantorovich-Rubinstein duality for the 1-Wasserstein distance, W_1(p_X, p_Y), and reformulated the problem as a min-max optimization that is solved through an adversarial training scheme. Inspired by this line of work, it can be shown that (see the supplementary material for a proof):

W_c(p_X, p_Y) ≤ W_c^‡(p_X, p_Y) = ∫_X c(x, ψ(φ(x))) p_X(x) dx

Furthermore, the r.h.s.
of Equation 3 supports a simple implementation where, for i.i.d. samples of the input distribution, {x_n}_{n=1}^N, the upper bound can be approximated as:

W_c^‡(p_X, p_Y) ≈ (1/N) Σ_{n=1}^N c(x_n, ψ(φ(x_n)))

The r.h.s. of Equation 3 and Equation 4 take advantage of the existence of pairs x_n and y_n = ψ(φ(x_n)), which make f(·) = ψ(φ(·)) a transport map between p_X and p_Y (but not necessarily the optimal transport map). In this paper, we minimize W_c^‡(p_X, p_Y) following Equation 4 to minimize the discrepancy between p_X and p_Y. Next, we focus on the discrepancy measures between p_Z and q_Z.

1.2 MINIMIZING DISSIMILARITY BETWEEN p_Z AND q_Z

If q_Z is a known distribution with an explicit formulation (e.g. a Normal distribution), the most straightforward approach for measuring the (dis)similarity between p_Z and q_Z is the log-likelihood of z = φ(x) with respect to q_Z; formally:

L = E_{x∼p_X}[log q_Z(φ(x))] ≈ (1/N) Σ_{n=1}^N log q_Z(φ(x_n))

Maximizing this log-likelihood is equivalent to minimizing the KL-divergence between p_Z and q_Z, D_KL(p_Z, q_Z) (see the supplementary material for more details and the derivation of Equation 5). This approach has two major limitations: 1) the KL-divergence, and in general f-divergences, do not provide meaningful dissimilarity measures for distributions supported on non-overlapping low-dimensional manifolds (see the supplementary material), which is common in hidden layers of neural networks, and therefore they do not provide informative gradients for training φ; and 2) we are limited to distributions q_Z that have known explicit formulations, which is restrictive as it eliminates the ability to use the much broader class of samplable distributions. Various alternatives exist in the literature to address the above-mentioned limitations. These methods often sample Z̃ = {z̃_j}_{j=1}^N from q_Z and Z = {z_n = φ(x_n)}_{n=1}^N from p_X and measure the discrepancy between these sets (i.e. point clouds). Note that there are no one-to-one correspondences between the z̃_j s and the z_n s. In their influential WAE paper, the authors proposed two different approaches for measuring the discrepancy between Z̃ and Z, namely a GAN-based and a maximum mean discrepancy (MMD)-based approach. The GAN-based approach defines a discriminator network, D_Z(p_Z, q_Z), to classify the z̃_j s and z_n s as coming from the 'true' and 'fake' distributions, correspondingly, and proposes a min-max adversarial optimization for learning φ and D_Z. The MMD-based approach utilizes a positive-definite reproducing kernel k : Z × Z → R to measure the discrepancy between Z̃ and Z. The choice of the kernel and its parameterization, however, remain a data-dependent design parameter. An interesting alternative approach is to use the Wasserstein distance between p_Z and q_Z. Following the Wasserstein-GAN approach, this can be accomplished by utilizing the Kantorovich-Rubinstein duality and introducing a min-max problem, which leads to yet another adversarial training scheme similar to the GAN-based method in WAE. Note that, since the elements of Z̃ and Z are not paired, an approach similar to Equation 4 could not be used to minimize the discrepancy. In this paper, we propose to use the sliced-Wasserstein metric BID3; BID21; BID6 to measure the discrepancy between p_Z and q_Z. We show that using the sliced-Wasserstein distance ameliorates the need for training an adversary network or choosing a data-dependent kernel (as in WAE-MMD), and provides an efficient, stable, and simple numerical implementation.
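For reference, the MMD-based alternative mentioned above can be sketched as follows. This is a minimal illustration with an RBF kernel, whose bandwidth gamma is exactly the data-dependent design parameter the text refers to; for brevity we use the biased V-statistic estimator rather than the unbiased U-statistic.

```python
import torch

def mmd_sq(z, z_tilde, gamma=1.0):
    """Biased estimate of MMD^2 between encoded samples z and prior samples
    z_tilde (both of shape (N, d)) under an RBF kernel with bandwidth gamma."""
    def k(a, b):
        return torch.exp(-gamma * torch.cdist(a, b) ** 2)
    return k(z, z).mean() + k(z_tilde, z_tilde).mean() - 2 * k(z, z_tilde).mean()
```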
Before explaining our proposed approach, it is worthwhile to point out the major difference between learning auto-encoders as generative models and GANs. In GANs, one needs to minimize a distance between {ψ(z_j) | z_j ∼ q_Z}_{j=1}^M and {x_n}_{n=1}^M, which are high-dimensional point clouds for which there are no correspondences between the ψ(z_j)s and the x_n s. For auto-encoders, on the other hand, there exist correspondences between the high-dimensional point clouds {x_n}_{n=1}^M and {y_n = ψ(φ(x_n))}_{n=1}^M, and the problem simplifies to matching the lower-dimensional point clouds {φ(x_n)}_{n=1}^M and {z_j ∼ q_Z}_{j=1}^M. In other words, the encoder performs a nonlinear dimensionality reduction that enables us to solve a simpler problem compared to GANs. Next we introduce the details of our approach. In what follows we first provide a brief review of the necessary equations to understand the Wasserstein and sliced-Wasserstein distances and then present our Sliced-Wasserstein auto-encoder (SWAE). The Wasserstein distance between probability measures ρ_X and ρ_Y, with corresponding densities dρ_X = p_X(x)dx and dρ_Y = p_Y(y)dy, is defined as:

W_c(p_X, p_Y) = inf_{γ∈Γ(ρ_X,ρ_Y)} ∫_{X×Y} c(x, y) dγ(x, y)

where Γ(ρ_X, ρ_Y) is the set of all transportation plans (i.e. joint measures) with marginal densities p_X and p_Y, and c : X × Y → R^+ is the transportation cost. Equation 6 is known as the Kantorovich formulation of the optimal mass transportation problem, which seeks the optimal transportation plan between p_X and p_Y. If there exist diffeomorphic mappings f : X → Y (i.e. transport maps) such that y = f(x) and consequently

p_Y(y) = p_X(f^{-1}(y)) |det(Df^{-1}(y))|

where det(D·) is the determinant of the Jacobian, then the Wasserstein distance can be defined based on the Monge formulation of the problem (see BID38 and BID22) as:

W_c(p_X, p_Y) = min_{f∈MP} ∫_X c(x, f(x)) p_X(x) dx

where MP is the set of all diffeomorphisms that satisfy Equation 7. As can be seen from Equation 6 and Equation 8, obtaining the Wasserstein distance requires solving an optimization problem. We note that various efficient optimization techniques have been proposed in the past (e.g. Cuturi; BID36; Oberman & Ruan) to solve this optimization. For one-dimensional probability densities, p_X and p_Y, however, the Wasserstein distance has a closed-form solution. Let P_X and P_Y be the cumulative distribution functions of the one-dimensional probability distributions p_X and p_Y, correspondingly. The Wasserstein distance can then be calculated as (see BID22 for more details):

W_c(p_X, p_Y) = ∫_0^1 c(P_X^{-1}(τ), P_Y^{-1}(τ)) dτ

This closed-form solution motivates the definition of the sliced-Wasserstein distance. The sliced-Wasserstein distance has similar qualitative properties to the Wasserstein distance, but it is much easier to compute. The sliced-Wasserstein distance was used in prior work to calculate barycenters of distributions and point clouds. BID3 provided a nice theoretical overview of barycentric calculations using the sliced-Wasserstein distance. BID21 used it to define positive definite kernels for distributions, and BID6 used it to define a kernel for persistence diagrams. Sliced-Wasserstein was recently used for learning Gaussian mixture models, and it was also used as a measure of goodness-of-fit for GANs in BID17. The main idea behind the sliced-Wasserstein distance is to slice (i.e., project) higher-dimensional probability densities into sets of one-dimensional marginal distributions and compare these marginal distributions via the Wasserstein distance. The slicing/projection process is related to the field of integral geometry and specifically the Radon transform (see BID15).
The result relevant to our discussion is that a $d$-dimensional probability density $p_X$ can be uniquely represented as the set of its one-dimensional marginal distributions following the Radon transform and the Fourier slice theorem BID15. These one-dimensional marginal distributions of $p_X$ are defined as: $\mathcal{R}p_X(t; \theta) = \int_X p_X(x)\,\delta(t - \langle\theta, x\rangle)\,dx,\ \forall\theta \in S^{d-1},\ \forall t \in \mathbb{R}$, where $S^{d-1}$ is the $d$-dimensional unit sphere. Note that for any fixed $\theta \in S^{d-1}$, $\mathcal{R}p_X(\cdot; \theta)$ is a one-dimensional slice of the distribution $p_X$. In other words, $\mathcal{R}p_X(\cdot; \theta)$ is a marginal distribution of $p_X$ that is obtained from integrating $p_X$ over the hyperplane orthogonal to $\theta$. Utilizing these marginal distributions in equation 10, the sliced-Wasserstein distance can be defined as: $SW_c(p_X, p_Y) = \int_{S^{d-1}} W_c\big(\mathcal{R}p_X(\cdot; \theta), \mathcal{R}p_Y(\cdot; \theta)\big)\,d\theta$. Given that $\mathcal{R}p_X(\cdot; \theta)$ and $\mathcal{R}p_Y(\cdot; \theta)$ are one-dimensional, the Wasserstein distance in the integrand has a closed-form solution (see equation 9). Moreover, it can be shown that $SW_c$ is a true metric (BID4 and BID20), and it induces the same topology as $W_c$, at least on compact sets BID34. A natural transportation cost that has been extensively studied in the past is the squared Euclidean distance, $c(x, y) = \|x - y\|_2^2$, for which there are theoretical guarantees on the existence and uniqueness of transportation plans and maps (see BID34 and BID38). When $c(x, y) = \|x - y\|_p^p$ for $p \geq 2$, the following upper bound holds for the SW distance: $SW_p^p(p_X, p_Y) \leq \alpha_{p,d}\,W_p^p(p_X, p_Y)$, where $\alpha_{p,d}$ is a constant that depends only on $p$ and $d$. Chapter 5 in BID4 proves this inequality. In our paper, we are interested in $p = 2$, for which $\alpha_{p,d} = \frac{1}{d}$, and we have: $SW_2^2(p_X, p_Y) \leq \frac{1}{d}\,W_2^2(p_X, p_Y)$. In the Numerical Implementation section, we provide a numerical experiment comparing $W_2$ and $SW_2$ that confirms the above bound. Our proposed formulation for the SWAE is as follows: $\operatorname*{argmin}_{\phi,\psi}\ W_c^\ddagger(p_X, p_Y) + \lambda\,SW_c(p_Z, q_Z)$, where $\phi$ is the encoder, $\psi$ is the decoder, $p_X$ is the data distribution, $p_Y$ is the data distribution after encoding and decoding (equation 2), $p_Z$ is the distribution of the encoded data (equation 1), $q_Z$ is a predefined samplable distribution, and $\lambda$ indicates the relative importance of the loss functions. To further clarify why we use the sliced-Wasserstein distance to measure the difference between $p_Z$ and $q_Z$, we reiterate that due to the lack of correspondences between $\tilde{z}_i$s and $z_j$s, one cannot minimize the upper bound in equation 4, and calculation of the Wasserstein distance requires an additional optimization step to obtain the optimal coupling between $p_Z$ and $q_Z$. To avoid this additional optimization, while maintaining the favorable characteristics of the Wasserstein distance, we use the sliced-Wasserstein distance to measure the discrepancy between $p_Z$ and $q_Z$. We now describe the numerical details of our approach. The Wasserstein distance between two one-dimensional probability densities $p_X$ and $p_Y$ is obtained from equation 9. The integral in equation 9 can be numerically estimated using the midpoint Riemann sum, $W_c(p_X, p_Y) \approx \frac{1}{M}\sum_{m=1}^{M} c\big(P_X^{-1}(\tau_m), P_Y^{-1}(\tau_m)\big)$ with $\tau_m = \frac{2m-1}{2M}$ (see FIG0). In scenarios where only samples from the distributions are available, $x_m \sim p_X$ and $y_m \sim p_Y$, the empirical densities can be estimated as $p_{X,M} = \frac{1}{M}\sum_{m=1}^{M}\delta_{x_m}$ and $p_{Y,M} = \frac{1}{M}\sum_{m=1}^{M}\delta_{y_m}$, where $\delta_{x_m}$ is the Dirac delta function centered at $x_m$. Therefore the corresponding empirical distribution function of $p_X$ is $P_X(t) \approx P_{X,M}(t) = \frac{1}{M}\sum_{m=1}^{M} u(t - x_m)$, where $u$ is the step function ($P_{Y,M}(t)$ is defined similarly). From the Glivenko-Cantelli theorem we have that $\sup_t |P_{X,M}(t) - P_X(t)| \xrightarrow{a.s.} 0$, where the convergence behavior is characterized by the Dvoretzky-Kiefer-Wolfowitz inequality: $P\big(\sup_t |P_{X,M}(t) - P_X(t)| > \epsilon\big) \leq 2\exp(-2M\epsilon^2)$. Calculating the Wasserstein distance with the empirical distribution function is computationally attractive.
Sorting $x_m$s in ascending order, such that $x_{i[m]} \leq x_{i[m+1]}$, where $i[m]$ is the index of the sorted $x_m$s, it is straightforward to see that $P_{X,M}^{-1}(\tau_m) = x_{i[m]}$ (see FIG0). The Wasserstein distance can therefore be approximated by first sorting $x_m$s and $y_m$s and then calculating: $W_c(p_X, p_Y) \approx \frac{1}{M}\sum_{m=1}^{M} c\big(x_{i[m]}, y_{j[m]}\big)$, where $i[m]$ and $j[m]$ are the sorting indices of the two sample sets. We need to address one final question here: how well does equation 15 approximate the Wasserstein distance, $W_c(p_X, p_Y)$? We first note that the rates of convergence of empirical distributions, for the $p$-Wasserstein metric (i.e., $c(x, y) = |x - y|^p$) of order $p \geq 1$, have been extensively studied in the mathematics and statistics communities (see for instance BID2 and BID9). A detailed description of these rates is, however, beyond the scope of this paper, especially since these rates depend on the choice of $p$. In short, for $p = 1$ it can be shown that $E\big(W_1(p_{X,M}, p_X)\big) \leq \frac{C}{\sqrt{M}}$, where $C$ is an absolute constant. Similar results are achieved for $E\big(W_p(p_{X,M}, p_X)\big)$ and $\big(E(W_p^p(p_{X,M}, p_X))\big)^{\frac{1}{p}}$, although under more strict assumptions on $p_X$ (i.e., slightly stronger assumptions than having a finite second moment). Using the triangle inequality together with the convergence rates of empirical distributions with respect to the $p$-Wasserstein distance, see BID2, for $W_1(p_{X,M}, p_X)$ (or more generally $W_p(p_{X,M}, p_X)$) we can show that (see the supplementary material): $E\big(|W_1(p_{X,M}, p_{Y,M}) - W_1(p_X, p_Y)|\big) \leq \frac{C}{\sqrt{M}}$ for some absolute constant $C$. We reiterate that similar bounds could be found for $W_p$, although with slightly more strict assumptions on $p_X$ and $p_Y$. In scenarios where only samples from the $d$-dimensional distribution $p_X$ are available, $x_m \sim p_X$, the empirical density can be estimated as $p_{X,M} = \frac{1}{M}\sum_{m=1}^{M}\delta_{x_m}$. Following equation 10 it is straightforward to show that the marginal densities (i.e., slices) are obtained from: $\mathcal{R}p_{X,M}(t; \theta) = \frac{1}{M}\sum_{m=1}^{M}\delta(t - \langle\theta, x_m\rangle)$; see the supplementary material for a proof. The Dvoretzky-Kiefer-Wolfowitz upper bound holds for $\mathcal{R}p_X(t; \theta)$ and $\mathcal{R}p_{X,M}(t; \theta)$. Minimizing the sliced-Wasserstein distance (i.e., as in the second term of equation 14) requires an integration over the unit sphere in $\mathbb{R}^d$, i.e., $S^{d-1}$. In practice, this integration is approximated by using a simple Monte Carlo scheme that draws uniform samples $\Theta = \{\theta_l\}_{l=1}^{L}$ from $S^{d-1}$ and replaces the integral with a finite-sample average: $SW_c(p_X, p_Y) \approx \frac{1}{L}\sum_{l=1}^{L} W_c\big(\mathcal{R}p_X(\cdot; \theta_l), \mathcal{R}p_Y(\cdot; \theta_l)\big)$.

Figure 2: SW approximations (scaled by $1.22\sqrt{d}$) for different dimensions $d$ and different numbers of random slices, $L$.

Moreover, the global minimum of $SW_c(p_Z, q_Z)$ is also a global minimum for each $W_c\big(\mathcal{R}p_Z(\cdot; \theta), \mathcal{R}q_Z(\cdot; \theta)\big)$. A fine sampling of $S^{d-1}$, however, is required for a good approximation of $SW_c(p_Z, q_Z)$. Intuitively, if $p_Z$ and $q_Z$ are similar, then their projections with respect to any finite subset of $S^{d-1}$ would also be similar. This leads to a stochastic gradient descent scheme where, in addition to the random sampling of the input data, we also randomly sample the projection angles from $S^{d-1}$. A natural question arises on the effect of the number of random slices, $L = |\Theta|$, on the approximation of the SW distance. Here, we devised a simple experiment that demonstrates the effect of $L$ on approximating the SW distance. We generated two random multivariate Gaussian distributions in a $d$-dimensional space, where $d \in \{2^n\}_{n=1}^{10}$, to serve as $p_X = \mathcal{N}(\mu_X, \Sigma_X)$ and $p_Y = \mathcal{N}(\mu_Y, \Sigma_Y)$. The Wasserstein distance between two Gaussian distributions has a closed-form solution, $W_2^2(p_X, p_Y) = \|\mu_X - \mu_Y\|_2^2 + \mathrm{Tr}\big(\Sigma_X + \Sigma_Y - 2(\Sigma_X^{\frac{1}{2}}\Sigma_Y\Sigma_X^{\frac{1}{2}})^{\frac{1}{2}}\big)$, which served as the ground-truth distance between the distributions.
We then measured the SW distance between $M = 1000$ samples generated from the two Gaussian distributions using $L \in \{1, 10, 50, 100, 500, 1000\}$ random slices. We repeated the experiment for each $L$ and $d$ a thousand times and report the means and standard deviations in Figure 2. Following equation 13, we scaled the SW distance by $\sqrt{d}$. Moreover, we found empirically that $1.22\sqrt{d}\,SW_2(p_{X,M}, p_{Y,M})$ provides an even closer approximation of $W_2(p_X, p_Y)$. It can be seen from Figure 2 that the expected value of the scaled SW distance closely follows the true Wasserstein distance. A more interesting observation is that the variance of the estimation increases for higher dimensions $d$ and decreases as the number of random projections, $L$, increases. Hence, calculating the SW distance in the image space, as in BID10, requires a very large number of projections $L$ to get a lower-variance approximation of the distance. Let $\{\theta_1, \ldots, \theta_L\}$ be randomly sampled from a uniform distribution on $S^{d-1}$. Then, using the numerical approximations described in this section, the loss function in equation 14 can be rewritten as: $\frac{1}{M}\sum_{m=1}^{M} c\big(x_m, \psi(\phi(x_m))\big) + \frac{\lambda}{LM}\sum_{l=1}^{L}\sum_{m=1}^{M} c\big(\theta_l \cdot \tilde{z}_{i[m]},\ \theta_l \cdot \phi(x_{j[m]})\big)$, where $i[m]$ and $j[m]$ index the sorted projections $\theta_l \cdot \tilde{z}_m$ and $\theta_l \cdot \phi(x_m)$, respectively. The steps of our proposed method are presented in Algorithm 1 (a minimal code sketch follows below). It is worth pointing out that sorting is by itself an optimization problem (which can be solved very efficiently), and therefore the sorting followed by the gradient descent update on $\phi$ and $\psi$ is in essence a min-max problem, which is being solved in an alternating fashion. Finally, we point out that each iteration of SWAE costs $O(LM\log(M))$ operations.

Algorithm 1: Sliced-Wasserstein Auto-Encoder (SWAE)
Require: Regularization coefficient $\lambda$, and number of random projections, $L$.
Initialize the parameters of the encoder, $\phi$, and decoder, $\psi$.
while $\phi$ and $\psi$ have not converged do:
  Sample $\{x_1, \ldots, x_M\}$ from the training set (i.e., $p_X$)
  Sample $\{\tilde{z}_1, \ldots, \tilde{z}_M\}$ from $q_Z$
  Sample $\{\theta_1, \ldots, \theta_L\}$ uniformly from $S^{d-1}$
  For each $\theta_l$, sort the projections $\theta_l \cdot \tilde{z}_m$ and $\theta_l \cdot \phi(x_m)$
  Update $\phi$ and $\psi$ by descending the gradient of the empirical loss above
end while

In our experiments we used three image datasets, namely the MNIST dataset by BID24, the CelebFaces Attributes Dataset (CelebA) by BID25, and the LSUN Bedroom Dataset by BID41. For the MNIST dataset we used a simple auto-encoder with mirrored classic deep convolutional neural networks with 2D average poolings, leaky rectified linear units (Leaky-ReLU) as the activation functions, and upsampling layers in the decoder. For the CelebA and LSUN datasets we used an architecture similar to. To test the capability of our proposed algorithm in shaping the latent space of the encoder, we started with the MNIST dataset and trained SWAE to encode this dataset into a two-dimensional latent space (for the sake of visualization) while enforcing a match between $p_X$ and $p_Y$ and between $p_Z$ and $q_Z$. We chose four different samplable distributions, as shown in FIG5. It can be seen that SWAE can successfully embed the dataset into the latent space while enforcing $p_Z$ to closely follow $q_Z$. In addition, we sample the two-dimensional latent space on a $25 \times 25$ grid in $[-1, 1]^2$ and decode these points to visualize their corresponding images in the digit/image space. To get a sense of the convergence behavior of SWAE, and similar to the work of BID17, we calculate the sliced-Wasserstein distance between $p_Z$ and $q_Z$ as well as between $p_X$ and $p_Y$ at each batch iteration, where we used p-LDA BID39 to calculate projections (see the supplementary material). We compared the convergence behavior of SWAE with the closest related work (specifically WAE-GAN), where an adversarial training is used to match $p_Z$ to $q_Z$, while the loss function for $p_X$ and $p_Y$ remains exactly the same between the two methods. We repeated the experiments 100 times and report the summary of results in FIG2.
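A minimal NumPy sketch of the sliced-Wasserstein term of Algorithm 1, using the closed-form 1D Wasserstein of equation 15 (sort and match order statistics) inside a random-projection loop; the reconstruction term and the gradient update are omitted, and all names are illustrative:

import numpy as np

def sliced_wasserstein(z, z_tilde, num_projections=50):
    # z = phi(x): encoded batch, shape (M, d); z_tilde: samples from q_Z, shape (M, d).
    d = z.shape[1]
    # Draw L directions uniformly on the unit sphere S^{d-1}.
    theta = np.random.randn(num_projections, d)
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Slice: project both point clouds onto each direction.
    proj_z = z @ theta.T           # shape (M, L)
    proj_zt = z_tilde @ theta.T    # shape (M, L)
    # Equation 15 per slice: sort each 1D point cloud and match order statistics.
    proj_z = np.sort(proj_z, axis=0)
    proj_zt = np.sort(proj_zt, axis=0)
    return np.mean((proj_z - proj_zt) ** 2)  # c(x, y) = (x - y)^2

The full SWAE objective of equation 14 would then combine this with the reconstruction term sketched earlier, e.g. reconstruction_cost(x, encoder, decoder) + lam * sliced_wasserstein(encoder(x), z_tilde).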
We mention that the exact same models and optimizers were used for both methods in this experiment. An interesting observation here is that while WAE-GAN provides good, or even slightly better, generated random samples for MNIST (lower sliced-Wasserstein distance between $p_X$ and $p_Y$), it fails to provide a good match between $p_Z$ and $q_Z$ for the choice of prior distribution reported in FIG2. This phenomenon seems to be related to the mode-collapse problem of GANs, where the adversary fails to sense that the distribution is not fully covered. Finally, in our experiments we did not notice a significant difference between the computational time for SWAE and WAE-GAN. For the MNIST experiment and on a single NVIDIA Tesla P100 GPU, each batch iteration (batch size = 500) of WAE-GAN took $0.2571 \pm 0.0435$ (sec), while SWAE (with $L = 50$ projections) took $0.2437 \pm 0.0391$ (sec).

TAB1: The distribution in the 64-dimensional latent space, $q_Z$, was set to Normal. We also report the negative log-likelihood of $\{z_i = \phi(x_i)\}$ with respect to $q_Z$ for 1000 testing samples for both datasets. We did not use Nowozin's trick for the GAN models.

Table 2: FID score statistics (N = 5) at the final iteration of training. Lower is better. Scores were computed with $10^4$ random samples from the testing set against an equivalent amount of generated samples.

The CelebA face and the LSUN bedroom datasets contain higher degrees of variation compared to the MNIST dataset, and therefore a two-dimensional latent space does not suffice to capture the variations in these datasets (see the supplementary material for more details on the dimensionality of the latent space). We used a $K = 64$ dimensional latent space for both the CelebA and the LSUN Bedroom datasets, and also used a larger auto-encoder (i.e., DCGAN, following the work of). For these datasets SWAE was trained with $q_Z$ being the Normal distribution to enable the calculation of the negative log-likelihood (NLL). TAB1 shows the comparison between SWAE and WAE for these two datasets. We note that all experimental parameters were kept the same to enable an apples-to-apples comparison. Finally, FIG4 demonstrates the interpolation between two sample points in the latent space, i.e., $\psi\big(t\phi(I_0) + (1 - t)\phi(I_1)\big)$ for $t \in [0, 1]$, for all three datasets. We introduced Sliced Wasserstein auto-encoders (SWAE), which enable one to shape the distribution of the encoded samples to any samplable distribution without the need for adversarial training or having a likelihood function specified. In addition, we provided a simple and efficient numerical scheme for this problem, which only relies on a few inner products and sorting operations in each SGD iteration. We further demonstrated the capability of our method on three image datasets, namely the MNIST, the CelebA face, and the LSUN Bedroom datasets, and showed competitive performance, in the sense of matching distributions $p_Z$ and $q_Z$, compared to the techniques that rely on additional adversarial training. Finally, we envision SWAE could be effectively used in transfer learning and domain adaptation algorithms where $q_Z$ comes from a source domain and the task is to encode the target domain $p_X$ in a latent space such that the latent distribution follows that of the source domain.

Figure 4: Sample convergence behavior for our method compared to WAE-GAN, where $q_Z$ is set to a ring distribution (FIG5, top left).
The columns represent batch iterations (batch size = 500). The top half of the figure shows results of $\psi(z)$ for $z \sim q_Z$, and the bottom half shows $z \sim q_Z$ and $\phi(x)$ for $x \sim p_X$. It can be seen that the adversarial loss in the latent space does not provide a full coverage of the distribution, which is a problem similar to the well-known 'mode collapse' problem in GANs. It can be seen that SWAE provides a superior match between $p_Z$ and $q_Z$ while it does not require adversarial training. Following the example by and later, here we show a simple example comparing the Jensen-Shannon divergence with the Wasserstein distance (a numerical check is sketched at the end of this supplementary discussion). First note that the Jensen-Shannon divergence is defined as $JS(p, q) = \frac{1}{2}KL\big(p, \frac{p+q}{2}\big) + \frac{1}{2}KL\big(q, \frac{p+q}{2}\big)$, where $KL(p, q) = \int p(x)\log(\frac{p(x)}{q(x)})\,dx$ is the Kullback-Leibler divergence. Now consider the following densities: let $p(x)$ be a uniform distribution around zero and let $q_\tau(x) = p(x - \tau)$ be a shifted version of $p$. FIG6 shows $W_1(p, q_\tau)$ and $JS(p, q_\tau)$ as a function of $\tau$. As can be seen, the JS divergence fails to provide a useful gradient when the distributions are supported on non-overlapping domains. To maximize (minimize) the similarity (dissimilarity) between $p_Z$ and $q_Z$, we can write: $\mathbb{E}_{x \sim p_X}\big[\log q_Z(\phi(x))\big] = \int_Z p_Z(z)\log q_Z(z)\,dz$, where we replaced $p_Z$ with equation 1. Furthermore, it is straightforward to show: $\int_Z p_Z(z)\log q_Z(z)\,dz = -D_{KL}(p_Z, q_Z) - \mathcal{H}(p_Z)$, where $\mathcal{H}(p_Z)$ denotes the entropy; hence maximizing the log-likelihood corresponds to minimizing the KL-divergence. The Wasserstein distance between the two probability measures $\rho_X$ and $\rho_Y$ with respective densities $p_X$ and $p_Y$ can be measured via the Kantorovich formulation of the optimal mass transport problem: $W_c(p_X, p_Y) = \inf_{\gamma \in \Gamma}\int_{X \times Y} c(x, y)\,\gamma(x, y)\,dx\,dy$, where $\Gamma = \{\gamma\,|\,\int_Y \gamma(x, y)\,dy = p_X(x),\ \int_X \gamma(x, y)\,dx = p_Y(y)\}$ is the set of all transportation plans (i.e., couplings or joint distributions) over $p_X$ and $p_Y$. Now, note that the two-step process of encoding $p_X$ into the latent space $Z$ and decoding it to $p_Y$ provides a particular coupling $\gamma_0(x, y) = \delta\big(y - \psi(\phi(x))\big)\,p_X(x) \in \Gamma$. The optimal coupling (i.e., transport plan) between $p_X$ and $p_Y$ could be equal to or different from $\gamma_0(x, y) = \delta\big(y - \psi(\phi(x))\big)\,p_X(x)$. In either case we can write: $W_c(p_X, p_Y) = \inf_{\gamma \in \Gamma}\int_{X \times Y} c(x, y)\,\gamma(x, y)\,dx\,dy \leq \int_X c\big(x, \psi(\phi(x))\big)\,p_X(x)\,dx = W_c^\ddagger(p_X, p_Y)$, which proves equation 3. Finally, taking the infimum of the two sides of the inequality with respect to $\phi$ and $\psi$, we have: $\inf_{\phi,\psi} W_c(p_X, p_Y) \leq \inf_{\phi,\psi} W_c^\ddagger(p_X, p_Y)$. Finally, we note that $\psi(\phi(\cdot)) = id(\cdot)$ is a global optimum for both $W_c(p_X, p_Y)$ and $W_c^\ddagger(p_X, p_Y)$. Following equation 10, a distribution can be sliced via: $\mathcal{R}p_{X,M}(t; \theta) = \frac{1}{M}\sum_{m=1}^{M}\delta(t - \langle\theta, x_m\rangle)$. Figure 9 demonstrates the outputs of trained SWAEs with $K = 2$ and $K = 128$ for sample input images. The input images were resized to $64 \times 64$ and then fed to our auto-encoder structure. This effect can also be seen for the MNIST dataset, as shown in FIG0. When the dimensionality of the latent space (i.e., the information bottleneck) is too low, the latent space will not contain enough information to reconstruct crisp images. Increasing the dimensionality of the latent space leads to crisper images. In this paper we also used the sliced-Wasserstein distance as a measure of goodness of fit (for convergence analysis). To provide a fair comparison between different methods, we avoided random projections for this comparison. Instead, we calculated a discriminant subspace to separate $\psi(z)$ from $\psi(\phi(x))$ for $z \sim q_Z$ and $x \sim p_X$, and set the projection parameters $\theta$s to the calculated discriminant components. This will lead to only slices that contain discriminant information. We point out that linear discriminant analysis (LDA) is not a good choice for this task, as it only leads to one discriminant component (because we only have two classes).
We used the penalized linear discriminant analysis (p-LDA), which utilizes a combination of LDA and PCA. In short, p-LDA solves the following objective function: $\operatorname*{argmax}_{\theta}\ \frac{\theta^T S_T \theta}{\theta^T (S_W + \alpha I)\theta}\ \ \text{s.t.}\ \|\theta\| = 1$, where $S_W$ is the within-class covariance matrix, $S_T$ is the data covariance matrix, $I$ is the identity matrix, and $\alpha$ identifies the interpolation between PCA and LDA (i.e., $\alpha = 0$ leads to LDA and $\alpha \rightarrow \infty$ leads to PCA). For $p \geq 1$ we can use the triangle inequality and write $W_p(p_{X,M}, p_{Y,M}) \leq W_p(p_{X,M}, p_X) + W_p(p_X, p_Y) + W_p(p_Y, p_{Y,M})$, which leads to $|W_p(p_{X,M}, p_{Y,M}) - W_p(p_X, p_Y)| \leq W_p(p_{X,M}, p_X) + W_p(p_{Y,M}, p_Y)$. Taking the expectation of both sides of the inequality and using the empirical convergence bounds of $W_p$ (in this case $W_1$), we have $E\big(|W_1(p_{X,M}, p_{Y,M}) - W_1(p_X, p_Y)|\big) \leq \frac{C}{\sqrt{M}}$ for some absolute constant $C$, where the last line comes from the empirical convergence bounds of distributions with respect to the Wasserstein distance, see BID2.
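Returning to the shifted-uniform example above, its qualitative behavior (W1 grows linearly with the shift while JS saturates) can be checked numerically. A quick sketch using SciPy's exact 1D Wasserstein routine and a histogram-based JS estimate on a shared grid; all parameter choices here are ours, not from the original text:

import numpy as np
from scipy.stats import wasserstein_distance
from scipy.spatial.distance import jensenshannon

def compare(tau, width=1.0, n=20000):
    # p: uniform around zero; q_tau: the same density shifted by tau.
    p = np.random.uniform(-width / 2, width / 2, n)
    q = p + tau
    w1 = wasserstein_distance(p, q)  # grows linearly: W1(p, q_tau) = |tau|
    # Discretise both sample sets on a shared grid to estimate JS.
    grid = np.linspace(-width, width + tau, 200)
    hp, _ = np.histogram(p, bins=grid, density=True)
    hq, _ = np.histogram(q, bins=grid, density=True)
    # jensenshannon returns the JS distance (sqrt of the divergence);
    # the divergence saturates at log(2) once the supports separate.
    js = jensenshannon(hp, hq) ** 2
    return w1, js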
In this paper we use the sliced-Wasserstein distance to shape the latent distribution of an auto-encoder into any samplable prior distribution.
912
scitldr
The Hamiltonian formalism plays a central role in classical and quantum physics. Hamiltonians are the main tool for modelling the continuous time evolution of systems with conserved quantities, and they come equipped with many useful properties, like time reversibility and smooth interpolation in time. These properties are important for many machine learning problems - from sequence prediction to reinforcement learning and density modelling - but are not typically provided out of the box by standard tools such as recurrent neural networks. In this paper, we introduce the Hamiltonian Generative Network (HGN), the first approach capable of consistently learning Hamiltonian dynamics from high-dimensional observations (such as images) without restrictive domain assumptions. Once trained, we can use HGN to sample new trajectories, perform rollouts both forward and backward in time, and even speed up or slow down the learned dynamics. We demonstrate how a simple modification of the network architecture turns HGN into a powerful normalising flow model, called Neural Hamiltonian Flow (NHF), that uses Hamiltonian dynamics to model expressive densities. Hence, we hope that our work serves as a first practical demonstration of the value that the Hamiltonian formalism can bring to machine learning. More results and video evaluations are available at: http://tiny.cc/hgn

Figure 1: The Hamiltonian manifold hypothesis: natural images lie on a low-dimensional manifold in pixel space, and natural image sequences (such as one produced by watching a two-body system, as shown in red) correspond to movement on the manifold according to Hamiltonian dynamics.

Any system capable of a wide range of intelligent behaviours within a dynamic environment requires a good predictive model of the environment's dynamics. This is true for intelligence in both biological and artificial systems. Predicting environmental dynamics is also of fundamental importance in physics, where Hamiltonian dynamics and the structure-preserving transformations it provides have been used to unify, categorise and discover new physical entities. Hamilton's fundamental result was a system of two first-order differential equations that, in a stroke, unified the predictions made by prior Newtonian and Lagrangian mechanics (Hamilton, 1834). After well over a century of development, it has proven to be essential for parsimonious descriptions of nearly all of physics. Hamilton's equations provide a way to predict a system's future behavior from its current state in phase space (that is, its position and momentum for classical Newtonian systems, and its generalized position and momentum more broadly). Hamiltonian mechanics induce dynamics with several nice properties: they are smooth, they include paths along which certain physical quantities are conserved (symmetries), and their time evolution is fully reversible. These properties are also useful for machine learning systems. For example, capturing the time-reversible dynamics of the world state might be useful for agents attempting to account for how their actions led to effects in the world.

Figure 2: Hamiltonian Generative Network schematic. The encoder takes a stacked sequence of images and infers the posterior over the initial state. The state is rolled out using the learnt Hamiltonian. Note that we depict Euler updates of the state for schematic simplicity, while in practice this is done using a leapfrog integrator. For each unroll step we reconstruct the image from the position q state variables only and calculate the reconstruction error.
Recovering an abstract low-dimensional manifold with paths that conserve various properties is tightly connected to outstanding problems in representation learning (see e.g. for more discussion), and the ability to conserve energy is related to expressive density modelling in generative approaches. Hence, we propose a reformulation of the well-known image manifold hypothesis by extending it with a Hamiltonian assumption (illustrated in Fig. 1): natural images lie on a low-dimensional manifold embedded within a high-dimensional pixel space, and natural sequences of images trace out paths on this manifold that follow the equations of an abstract Hamiltonian. Given the rich set of established tools provided by Hamiltonian descriptions of system dynamics, can we adapt these to solve outstanding machine learning problems? When it comes to adapting the Hamiltonian formalism to contemporary machine learning, two questions need to be addressed: 1) how should a system's Hamiltonian be learned from data; and 2) how should a system's abstract phase space be inferred from the high-dimensional observations typically available to machine learning systems? Note that the inferred state may need to include information about properties that play no physical role in classical mechanics but which can still affect their behavior or function, like the colour or shape of an object. The first question was recently addressed by the Hamiltonian Neural Network (HNN) approach, which was able to learn the Hamiltonian of three simple physical systems from noisy phase space observations. However, to address the second question, HNN makes assumptions that restrict it to Newtonian systems and appear to limit its ability to scale to more challenging video datasets. In this paper we introduce the first model that answers both of these questions without relying on restrictive domain assumptions. Our model, the Hamiltonian Generative Network (HGN), is a generative model that infers the abstract state from pixels and then unrolls the learned Hamiltonian following the Hamiltonian equations. We demonstrate that HGN is able to reliably learn the Hamiltonian dynamics from noisy pixel observations on four simulated physical systems: a pendulum, a mass-spring and two- and three-body systems. Our approach outperforms HNN by a significant margin. After training, we demonstrate that HGN produces meaningful samples with reversible dynamics and that the speed of rollouts can be controlled by changing the time derivative of the integrator at test time. Finally, we show that a small modification of our architecture yields a flexible, normalising flow-based generative model that respects Hamiltonian dynamics. We show that this model, which we call Neural Hamiltonian Flow (NHF), inherits the beneficial properties of the Hamiltonian formalism (including volume preservation) and is capable of expressive density modelling, while offering computational benefits over standard flow-based models. Most machine learning approaches to modeling dynamics use discrete time steps, which often results in an accumulation of approximation errors when producing rollouts and, therefore, in a fast drop in accuracy. Our approach, on the other hand, does not discretise continuous dynamics and models them directly using the Hamiltonian differential equations, which leads to slower divergence for longer rollouts.
The density model version of HGN (NHF) uses the Hamiltonian dynamics as normalising flows along with a numerical integrator, making our approach somewhat related to the recently published neural ODE work. What makes our approach different is that Hamiltonian dynamics are both invertible and volume-preserving (as discussed in Sec. 3.3), which makes our approach computationally cheaper than the alternatives and more suitable as a model of physical systems and other processes that have these properties. Also related is recent work attempting to learn a model of physical system dynamics end-to-end from image sequences using an autoencoder (de). Unlike our work, this model does not exploit Hamiltonian dynamics and is trained in a supervised or semi-supervised regime. One of the most comparable approaches to ours is the Hamiltonian Neural Network (HNN). This work, done concurrently to ours, proposes a way to learn Hamiltonian dynamics from data by training the gradients of a neural network (obtained by backpropagation) to match the time derivative of a target system in a supervised fashion. In particular, HNN learns a differentiable function H(q, p) that maps a system's state (its position, q, and momentum, p) to a scalar quantity interpreted as the system's Hamiltonian. This model is trained so that H(q, p) satisfies the Hamiltonian equations by minimizing $\mathcal{L}_{HNN} = \big\|\frac{\partial H}{\partial p} - \frac{\partial q}{\partial t}\big\|_2 + \big\|\frac{\partial H}{\partial q} + \frac{\partial p}{\partial t}\big\|_2$, where the derivatives $\frac{\partial H}{\partial q}$ and $\frac{\partial H}{\partial p}$ are computed by backpropagation (a short sketch of this objective is given below). Hence, this learning procedure is most directly applicable when the true state space (in canonical coordinates) and its time derivatives are known. Accordingly, in the majority of the experiments presented by the authors, the Hamiltonian was learned from the ground truth state space directly, rather than from pixel observations. The single experiment with pixel observations required a modification of the model. First, the input to the model became a concatenated, flattened pair of images $o_t = [x_t, x_{t+1}]$, which was then mapped to a low-dimensional embedding space $z_t = [q_t, p_t]$ using an encoder neural network. Note that the dimensionality of this embedding ($z \in \mathbb{R}^2$ in the case of the pendulum system presented in the paper) was chosen to perfectly match the ground truth dimensionality of the phase space, which was assumed to be known a priori. This, however, is not always possible. The latent embedding was then treated as an estimate of the position and the momentum of the system depicted in the images, where the momentum was assumed to be equal to the velocity of the system - an assumption enforced by an additional constraint found necessary to encourage learning, which encouraged the time derivative of the position latent to equal the momentum latent using finite differences on the split latents, a loss of the form $\|p_t - (q_{t+1} - q_t)\|_2$. This assumption is appropriate in the case of the simple pendulum system presented in the paper; however, it does not hold more generally. Note that our approach does not make any assumptions on the dimensionality of the learned phase space, or the form of the momenta coordinates, which makes our approach more general and allows it to perform well on a wider range of image domains, as presented in Sec. 4.

Figure 3: A: standard normalising flow, where the invertible function $f_i$ is implemented by a neural network. B: Hamiltonian flows, where the initial density is transformed using the learned Hamiltonian dynamics. Note that we depict Euler updates of the state for schematic simplicity, while in practice this is done using a leapfrog integrator.
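A sketch of the HNN training signal using JAX for the backprop gradients; `hamiltonian` stands for the scalar network H(q, p), the targets `dq_dt` and `dp_dt` are assumed given (or estimated by finite differences, as in the original setup), and the squared-error form is our simplification of the norms above:

import jax
import jax.numpy as jnp

def hnn_loss(hamiltonian, q, p, dq_dt, dp_dt):
    # Gradients of the scalar H with respect to its inputs, via backprop.
    dH_dq = jax.grad(hamiltonian, argnums=0)(q, p)
    dH_dp = jax.grad(hamiltonian, argnums=1)(q, p)
    # Penalise violation of Hamilton's equations:
    #   dq/dt = dH/dp  and  dp/dt = -dH/dq.
    return jnp.sum((dH_dp - dq_dt) ** 2) + jnp.sum((dH_dq + dp_dt) ** 2)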
The Hamiltonian formalism describes the continuous time evolution of a system in an abstract phase space $s = (q, p) \in \mathbb{R}^{2n}$, where $q \in \mathbb{R}^n$ is a vector of position coordinates, and $p \in \mathbb{R}^n$ is the corresponding vector of momenta. The time evolution of the system in phase space is given by the Hamiltonian equations: $\frac{dq}{dt} = \frac{\partial H}{\partial p},\ \ \frac{dp}{dt} = -\frac{\partial H}{\partial q}$, where the Hamiltonian $H: \mathbb{R}^{2n} \rightarrow \mathbb{R}$ maps the state $s = (q, p)$ to a scalar representing the energy of the system. The Hamiltonian specifies a vector field over the phase space that describes all possible dynamics of the system. For example, the Hamiltonian for an undamped mass-spring system is $H = \frac{kq^2}{2} + \frac{p^2}{2m}$, where $m$ is the mass, $q \in \mathbb{R}$ is its position, $p \in \mathbb{R}$ is its momentum and $k$ is the spring stiffness coefficient. The Hamiltonian can often be expressed as the sum of the kinetic $T$ and potential $V$ energies, $H = T(p) + V(q)$, as is the case for the mass-spring example. Identifying a system's Hamiltonian is in general a very difficult problem, requiring carefully instrumented experiments and researcher insight produced by years of training. In what follows, we describe a method for modeling a system's Hamiltonian from raw observations (such as pixels) by inferring a system's state with a generative model and rolling it out with the Hamiltonian equations. Our goal is to build a model that can learn a Hamiltonian from observations. We assume that the data comes in the form of high-dimensional noisy observations, where each $x_i = G(s_i) = G(q_i)$ is a non-deterministic function of the generalised position in the phase space, and the full state is a non-deterministic function of a sequence of images, since the momentum (and hence the full state) cannot in general be recovered from a single observation. Our goal is to infer the abstract state and learn the Hamiltonian dynamics in phase space by observing K motion sequences, discretised into T + 1 time steps each. In the process, we also want to learn an approximation to the generative process G(s) in order to be able to move in both directions between the high-dimensional observations and the low-dimensional abstract phase space. Although the Hamiltonian formalism is general and does not depend on the form of the observations, we present our model in terms of visual observations, since many known physical Hamiltonian systems, like a mass-spring system, can be easily observed visually. In this section we introduce the Hamiltonian Generative Network (HGN), a generative model that is trained to behave according to the Hamiltonian dynamics in an abstract phase space learned from raw observations of image sequences. HGN consists of three parts (see Fig. 2): an inference network, a Hamiltonian network and a decoder network, which are discussed next. The inference network takes in a sequence of images $(x_0^i, \ldots, x_T^i)$, concatenated along the channel dimension, and outputs a posterior over the initial state $z \sim q_\phi(\cdot|x_0, \ldots, x_T)$, corresponding to the system's coordinates in phase space at the first frame of the sequence. We parametrise $q_\phi(z)$ as a diagonal Gaussian with a unit Gaussian prior $p(z) = \mathcal{N}(0, I)$ and optimise it using the usual reparametrisation trick.

Figure 4: A schematic representation of NHF, which can perform expressive density modelling by using the learned Hamiltonians as normalising flows. Note that we depict Euler updates of the state for schematic simplicity, while in practice this is done using a leapfrog integrator.
To increase the expressivity of the abstract phase space $s_0$, we map samples from the posterior with another function $s_0 = f_\psi(z)$ to obtain the system's initial state. As mentioned in Sec. 3.1, the Hamiltonian function expects the state to be of the form $s = (q, p)$, hence we initialise $s_0 \in \mathbb{R}^{2n}$ and arbitrarily assign the first half of the units to represent abstract position $q$ and the other half to represent abstract momentum $p$. The Hamiltonian network is parametrised as a neural network with parameters $\gamma$ that takes in the inferred abstract state and maps it to a scalar $H_\gamma(s_t) \in \mathbb{R}$. We can use this function to do rollouts in the abstract state space using the Hamiltonian equations (Eq. 3), for example by Euler integration: $q_{t+dt} = q_t + dt\,\frac{\partial H_\gamma}{\partial p},\ \ p_{t+dt} = p_t - dt\,\frac{\partial H_\gamma}{\partial q}$. In this work we assume a separable Hamiltonian, so in practice we use a more sophisticated leapfrog integrator to roll out the system, since it has better theoretical properties and results in better performance in practice (see Sec. A.6 in Supplementary Materials for more details). The decoder network is a standard deconvolutional network (we use the architecture from) that takes in a low-dimensional representation vector and produces a high-dimensional pixel reconstruction. Given that each instantaneous image does not depend on the momentum information, we restrict the decoder to take only the position coordinates of the abstract state as input: $p_\theta(x_t|s_t) = p_\theta(x_t|q_t)$. The objective function. Given a sequence of T + 1 images, HGN is trained to optimise the following objective: $\mathcal{L} = \frac{1}{T+1}\sum_{t=0}^{T}\mathbb{E}_{q_\phi(z|x_0, \ldots, x_T)}\big[\log p_\theta(x_t|q_t)\big] - KL\big(q_\phi(z|x_0, \ldots, x_T)\,\|\,p(z)\big)$, which can be seen as a temporally extended variational autoencoder (VAE) objective, consisting of a reconstruction term for each frame, and an additional term that encourages the inferred posterior to match a prior. The key difference with a standard VAE lies in how we generate rollouts - these are produced using the Hamiltonian equations of motion in the learned Hamiltonian phase space. In this section, we describe how the architecture described above can be modified to produce a model for flexible density estimation. Learning computationally feasible and accurate estimates of complex densities is an open problem in generative modelling. A common idea to address this problem is to start with a simple prior distribution $\pi(u)$ and then transform it into a more expressive form $p(x)$ through a series of composable invertible transformations $f_i(u)$ called normalising flows (see Fig. 3A). Sampling can then be done according to $x = f_T \circ \ldots \circ f_1(u)$, where $u \sim \pi(\cdot)$. Density evaluation, however, requires more expensive computations of both inverting the flows and calculating the determinants of their Jacobians. For a single flow step, this equates to $p(x) = \pi\big(f^{-1}(x)\big)\,\big|\det\big(\frac{\partial f^{-1}}{\partial x}\big)\big|$.

Figure 5: Ground truth Hamiltonians and samples from generated datasets for the ideal pendulum, mass-spring, and two- and three-body systems used to train HGN.

While a lot of work has been done recently into proposing better alternatives for flow-based generative models in machine learning, none of the approaches manage to produce both sampling and density evaluation steps that are computationally scalable. The two requirements for normalising flows are that they are invertible and volume preserving, which are exactly the two properties that Hamiltonian dynamics possess. This can be seen by computing the determinant of the Jacobian of the infinitesimal transformation induced by the Hamiltonian H. By Jacobi's formula, the derivative of the determinant at the identity is the trace, and the trace of the Jacobian of the Hamiltonian vector field vanishes: $\sum_i\big(\frac{\partial^2 H}{\partial q_i\,\partial p_i} - \frac{\partial^2 H}{\partial p_i\,\partial q_i}\big) = 0$, so the Hamiltonian flow preserves volume in phase space.
Hence, in this section we describe a simple modification of HGN that allows it to act as a normalising flow. We will refer to this modification as the Neural Hamiltonian Flow (NHF) model. First, we assume that the initial state $s_0$ is a sample from a simple prior $s_0 \sim \pi_0(\cdot)$. We then chain several Hamiltonians $H_i$ to transform the sample to a new state $s_T = H_T \circ \ldots \circ H_1(s_0)$, which corresponds to a sample from the more expressive final density $s_T \sim p(x)$ (see Fig. 3B for an illustration of a single Hamiltonian flow). Note that unlike HGN, where the Hamiltonian dynamics are shared across time steps (a single Hamiltonian is learned and its parameters are shared across time steps of a rollout), in NHF each step of the flow (corresponding to a single time step of a rollout) can be parametrised by a different Hamiltonian. The inverse of such a Hamiltonian flow can be easily obtained by replacing dt by -dt in the Hamiltonian equations and reversing the order of the transformations, $s_0 = H_1^{-dt} \circ \ldots \circ H_T^{-dt}(s_T)$ (we will use the appropriate dt or -dt superscript from now on to make the direction of integration of the Hamiltonian dynamics more explicit). Since the flow is volume-preserving, the resulting density $p(s_T)$ is given by the following equation: $p(s_T) = \pi_0\big(H_1^{-dt} \circ \ldots \circ H_T^{-dt}(s_T)\big)$. Our proposed NHF is more computationally efficient than many other flow-based approaches, because it does not require the expensive step of calculating the trace of the Jacobian. Hence, the NHF model constitutes a more structured form of a Neural ODE flow, but with a few notable differences: (i) the Hamiltonian ODE is volume-preserving, which makes the computation of log-likelihood cheaper than for a general ODE flow; (ii) general ODE flows are only invertible in the limit dt -> 0, whereas for some Hamiltonians we can use more complex integrators (like the symplectic leapfrog integrator described in Sec. A.6) that are both invertible and volume-preserving for any dt > 0. The structure $s = (q, p)$ on the state space imposed by the Hamiltonian dynamics can be constraining from the point of view of density estimation. We choose to use the trick proposed in the Hamiltonian Monte Carlo (HMC) literature, which treats the momentum p as a latent variable (see Fig. 4). This is an elegant solution which avoids having to artificially split the density into two disjoint sets. As a result, the data density that our Hamiltonian flows are modelling becomes exclusively parametrised by $p(q_T)$, which takes the following form: $p(q_T) = \int \pi_0\big(H^{-dt}(q_T, p_T)\big)\,dp_T$. This integral is intractable, but a variational lower bound on it (the ELBO in Eq. 6) can be derived by introducing a variational distribution over $p_T$. Note that, in contrast to VAEs, the ELBO in Eq. 6 is not explicitly in the form of a reconstruction error term plus a KL term. In order to directly compare the performance of HGN to that of its closest baseline, HNN, we generated four datasets analogous to the data used in. The datasets contained observations of the time evolution of four physical systems: mass-spring, pendulum, two- and three-body (see Fig. 5). In order to generate each trajectory, we first randomly sampled an initial state, then produced a 30-step rollout following the ground truth Hamiltonian dynamics, before adding Gaussian noise with standard deviation $\sigma^2 = 0.1$ to each phase-space coordinate, and rendering a corresponding 64x64 pixel observation. We generated 50 000 train and 10 000 test trajectories for each dataset. When sampling initial states, we start by first sampling the total energy of the system, denoted as a radius r in the phase space, before sampling the initial state (q, p) uniformly on the circle of radius r.
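A minimal sketch of this initial-state sampling for a one-dimensional phase space; the radius range shown is the one reported below for the mass-spring system, and all names are illustrative:

import numpy as np

def sample_initial_state(r_min=0.1, r_max=1.0):
    # Sample the total energy as a radius in phase space, then a point
    # uniformly on the circle of that radius.
    r = np.random.uniform(r_min, r_max)
    angle = np.random.uniform(0.0, 2.0 * np.pi)
    q = r * np.cos(angle)
    p = r * np.sin(angle)
    return q, p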
Note that our pendulum dataset is more challenging than the one described in, where the pendulum had a fixed radius and was initialized at a maximum angle of 30 degrees from the central axis. Mass-spring. The dynamics of a frictionless mass-spring system are modeled by the Hamiltonian $H = \frac{kq^2}{2} + \frac{p^2}{2m}$, where $k$ is the spring constant and $m$ is the mass. We fix $k = 2$ and $m = 0.5$, then sample a radius from a uniform distribution $r \sim U(0.1, 1.0)$. Pendulum. The dynamics of a frictionless pendulum are modeled by the Hamiltonian $H = 2mgl\,(1 - \cos q) + \frac{p^2}{2ml^2}$, where $g$ is the gravitational constant and $l$ is the length of the pendulum. We fix $g = 3$, $m = 0.5$, $l = 1$, then sample a radius from a uniform distribution $r \sim U(1.3, 2.3)$. Two- and three-body problems. In an n-body problem, particles interact with each other through an attractive force, like gravity. The dynamics are represented by the following Hamiltonian: $H = \sum_{i=1}^{n}\frac{\|p_i\|^2}{2m_i} - \sum_{1 \leq i < j \leq n}\frac{g\,m_i m_j}{\|q_j - q_i\|}$. We set $m = 1$ and $g = 1$ for both systems. For the two-body problem, we set $r \sim U(0.5, 1.5)$, and we also change the observation noise to $\sigma^2 = 0.05$. For the three-body problem, we set $r \sim U(0.9, 1.2)$, and set the observation noise to $\sigma^2 = 0.2$. Learning the Hamiltonian. We tested whether HGN and the HNN baseline could learn the dynamics of the four systems described above. To ensure that our re-implementation of HNN was correct, we replicated all the results presented in the original paper by verifying that it could learn the dynamics of the mass-spring, pendulum and two-body systems well from the ground truth state, and the dynamics of a restricted pendulum from pixels. We also compared different modifications of HGN: a version trained and tested with an Euler rather than a leapfrog integrator (HGN Euler), a version trained with no additional function between the posterior and the prior (HGN no $f_\psi$), and a deterministic version (HGN determ), which did not include the sampling step from the posterior $q_\phi(z|x_0 \ldots x_T)$. Tbl. 1 compares two versions of HNN - the original architecture and a convolutional version closely matched to the architecture of HGN - and four versions of our proposed Hamiltonian Generative Network (HGN): the full version, a version trained and tested with an Euler rather than a leapfrog integrator, a deterministic rather than a generative version, and a version of HGN with no extra network between the posterior and the initial state. Tbl. 1 and Fig. 6 demonstrate that HGN and its modifications learned well on all four datasets. However, when we attempted to train HNN on the four datasets described above, its Hamiltonian often collapsed to 0 and the model failed to reproduce any dynamics, defaulting to a static single image. We were unable to improve on this performance despite our best efforts, including a modification of the architecture to closely match ours (referred to as HNN Conv) (see Sec. A.3 of the appendix for details). Tbl. 1 shows that the average mean squared error (MSE) of the pixel reconstructions on both the train and test data is an order of magnitude better for HGN compared to both versions of HNN. The same holds when visualising the average per-frame MSE of a single train and test rollout for each dataset, shown in Fig. 6. Note that the different versions of HGN have different trade-offs. The deterministic version produces more accurate reconstructions but it does not allow sampling. This effect is equivalent to a similar distinction between autoencoders and VAEs.
Using the simpler Euler integrator rather than the more involved leapfrog one might be conceptually more appealing; however, it does not provide the same energy conservation and reversibility properties as the leapfrog integrator, as evidenced by the increase by an order of magnitude of the variance of the learned Hamiltonian throughout a sequence rollout, as shown in Tbl. 2. The full version of HGN, on the other hand, is capable of reproducing the dynamics well, is capable of producing diverse yet plausible rollout samples (Fig. 8), and its rollouts can be reversed in time, sped up or slowed down by either changing the value or the sign of dt used in the integrator (Fig. 7).

Figure 7: Example of a train and a test sequence from the dataset of a three-body system, its inferred forward, backward, double speed and half speed rollouts in time from HGN, and a forward rollout from HNN. HNN did not learn the dynamics of the system and instead learned to reconstruct an average image.

Expressive density modelling using learned Hamiltonian flows. We evaluate whether NHF is capable of expressive density modelling by stacking learned Hamiltonians into a series of normalising flows. Fig. 9 demonstrates that NHF can transform a simple soft-uniform prior distribution $\pi(s_0; \sigma, \beta)$ into significantly more complex densities with arbitrary numbers of modes. The soft-uniform density, where f is the sigmoid function and $\beta$ is a constant, was chosen to make it easier to visualise the learned attractors. The model also performed well with other priors, including a Normal distribution. It is interesting to note that the trained model is very interpretable. When decomposed into the equivalents of the kinetic and potential energies, it can be seen that the learned potential energy V(q) has several local minima, one for each mode of the data. As a consequence, the trajectory of the initial samples through the flow has attractors at the modes of the data. We have also compared the performance of NHF to that of the RNVP baseline. Fig. 10 shows that the two approaches are comparable in their performance, but NHF is more computationally efficient, as discussed at the end of Sec. 3.3. The RNVPs use alternating masks with two layers (red) or three layers (purple). NHF uses 1, 2 or 3 leapfrog steps (blue, yellow and green respectively). We have presented HGN, the first deep learning approach capable of reliably learning Hamiltonian dynamics from pixel observations. We have evaluated our approach on four classical physical systems and demonstrated that it outperformed the only relevant baseline by a large margin. Hamiltonian dynamics have a number of useful properties that can be exploited more widely by the machine learning community. For example, the Hamiltonian induces a smooth manifold and a vector field in the abstract phase space along which certain physical quantities are conserved. The time evolution along these paths is also completely reversible. These properties can have wide implications in such areas of machine learning as reinforcement learning, representation learning and generative modelling. We have demonstrated the first step towards applying the learnt Hamiltonian dynamics as normalising flows for expressive yet computationally efficient density modelling.
We hope that this work serves as the first step towards a wider adoption of the rich body of physics literature around the Hamiltonian principles in the machine learning community.

A SUPPLEMENTARY MATERIALS

The Hamiltonian Generative Network (HGN) consists of three major parts: an encoder, the Hamiltonian transition network and a decoder. During training the encoder starts with a sequence of raw training images and encodes it into a probabilistic prior representation, transformed with an additional network on top into a start state consisting of a downsized spatial representation in latent space (4x4), where each abstract pixel is the concatenation of abstract position (q) and momentum (p) (each of dimension 16). The encoder network is a convolutional neural network with 8 layers, with 32 filters on the first layer, then 64 filters on each subsequent layer, while in the last layer we have 48 filters. The final encoder transformer network is a convolutional neural network with 3 layers and 64 filters on each layer. Starting from this initial embedded state, the Hamiltonian transition network generates subsequent states using a symplectic integrator approximating the Hamiltonian equations. The Hamiltonian transition network represents the Hamiltonian function as a function from the abstract position and momentum space to the real numbers at any time step t. The Hamiltonian transition network is a convolutional neural network of 6 layers, each consisting of 64 filters. The discrete timestep we use for the symplectic integrator update step is dt = 0.125. At each time step t the decoder network $d_\theta$ takes only the abstract position part $q_t$ of the state $s_t$ and decodes it back to an output image $x_t \in \mathbb{R}^{32 \times 32 \times 3}$ of the same shape as the input images. The decoder network is a progressive network consisting of 3 residual blocks, where each residual block resizes the current input image by a factor of 2 using the nearest-neighbor method (at the end we have to upscale our latent spatial dimension of 4 to the desired output image dimension of 32 in these steps), followed by 2 blocks of a one-layer convolutional neural network with 64 filters and a leaky ReLU activation function, closing with a sigmoid activation in each block. After the 3 blocks, a final one-layer convolutional neural network outputs the output image with the right number of channels. We use the Adam optimiser with learning rate 1.5e-4. When optimising the loss, in practice we do not learn the variance of the decoder $p_\theta(x|s)$ and fix it to 1, which makes the reconstruction objective equivalent to a scaled L2 loss. Furthermore, we introduce a Lagrange multiplier in front of the KL term and optimise it using the same method as in. For all NHF experiments the Hamiltonian was of the form H(q, p) = K(p) + V(q). The kinetic energy term K and the potential energy term V are soft-plus MLPs with layer sizes [d, 128, 128, 1], where d is the dimension of the data. Soft-plus non-linearities were chosen because the optimisation of Hamiltonian flows involves second-order derivatives of the MLPs used for parametrising the Hamiltonians. This makes ReLU non-linearities unsuitable. The encoder network was parametrized as $f_\psi(p|q) = \mathcal{N}(p; \mu(q), \sigma(q))$, where $\mu$ and $\sigma$ are ReLU MLPs with size [d, 128, 128, d]. The Hamiltonian flow $H^{dt}$ was approximated using a leapfrog integrator, since it preserves volume and is invertible for any dt (see also Section A.6). We found that only two leapfrog steps were sufficient for all our examples.
Parameters were optimised using Adam (learning rate 3e-4) and Lagrange multipliers were optimised using the same method as in. All shown kernel density estimate (KDE) plots used 1000 samples and an isotropic Gaussian kernel bandwidth of 0.3. For the RNVP baseline we used alternating masks with two or three layers. Each RNVP layer used an affine coupling parametrized by a two-layer ReLU MLP that matched those used in the leapfrog. The Hamiltonian Neural Network (HNN) learns a differentiable function H(q, p) that maps a system's state in phase space (its position q and momentum p) to a scalar quantity interpreted as the system's Hamiltonian. This model is trained so that H(q, p) satisfies the Hamiltonian equations by minimizing $\mathcal{L}_{HNN} = \big\|\frac{\partial H}{\partial p} - \frac{\partial q}{\partial t}\big\|_2 + \big\|\frac{\partial H}{\partial q} + \frac{\partial p}{\partial t}\big\|_2$, where the derivatives $\frac{\partial H}{\partial q}$ and $\frac{\partial H}{\partial p}$ are computed by backpropagation. In the original paper, these targets are either assumed to be known or are estimated by finite differences using the state at times t and t+1. Accordingly, in the majority of the experiments presented by the authors, the Hamiltonian was learned from the ground truth state space directly, rather than from pixel observations. The original HNN model is trained in a supervised fashion on the ground truth state of a physical system and its time derivatives. As such, it is not directly comparable to our method, which learns a Hamiltonian directly from pixels. Instead, we compare to the PixelHNN variant of the HNN, which is introduced in the same paper, and which is able to learn a Hamiltonian from images and in the absence of the true state or time derivative in some settings. This required a modification of the model. First, the input to the model became a concatenated, flattened pair of images $X_t = (x_t, x_{t+1})$, which was then mapped to a low-dimensional embedding space $z_t = (q_t, p_t)$ using an encoder neural network. Note that the dimensionality of this embedding ($z \in \mathbb{R}^2$ in the case of the pendulum system presented in the paper) is chosen to perfectly match the ground truth dimensionality of the phase space, which was assumed to be known a priori. This, however, is not always possible, as when a system has not yet been identified. The latent embedding was then treated as an estimate of the position and the momentum of the system depicted in the images, where the momentum was assumed to be equal to the velocity of the system - an assumption enforced by an additional constraint found necessary to encourage learning, which encouraged the time derivative of the position latent to equal the momentum latent using finite differences on the split latents, a loss of the form $\|p_t - (q_{t+1} - q_t)\|_2$. This loss is motivated by the observation that in simple Newtonian systems with unit mass, the system's state is fully described by the position and its time derivative (the system's velocity). An image embedding that corresponds to the position and velocity of the system will minimize this loss. This assumption is appropriate in the case of the simple pendulum system presented in the paper; however, it does not hold more generally. As mentioned earlier, PixelHNN takes as input a concatenated, flattened pair of images and maps them to an embedding space $z_t = (q_t, p_t)$, which is treated as an estimate of the position and momentum of the system depicted in the images. Note that $X_t$ always consists of two images in order to make the momentum observable. This embedding space is used as the input to an HNN, which is trained to learn the Hamiltonian of the system as before, but using the embedding instead of the true system state.
To enable stable learning in this configuration, the PixelHNN uses a standard mean-squared error autoencoding loss: $\mathcal{L}_{AE} = \frac{1}{N}\sum_{i=1}^{N}\big(\hat{X}_t^i - X_t^i\big)^2$, where $\hat{X}_t$ is the autoencoder output and $X_t^i$ is the value of pixel i of N total pixels in $X_t$. This loss encourages the network embedding to reflect the content of the input images and to avoid the trivial solution to the HNN loss. The full PixelHNN loss combines these terms: $\mathcal{L}_{PixelHNN} = \mathcal{L}_{AE} + \mathcal{L}_{CC} + \lambda_{HNN}\,\mathcal{L}_{HNN} + \lambda_{WD}\,\mathcal{L}_{WD}$, where $\mathcal{L}_{HNN}$ is computed using the finite-difference estimate of the time derivative of the embedding and $\mathcal{L}_{CC}$ is the latent constraint above. $\lambda_{HNN}$ is a Lagrange multiplier, which is set to 0.1, as in the original paper. $\mathcal{L}_{WD}$ is a standard L2 weight decay and its Lagrange multiplier $\lambda_{WD}$ is set to 1e-5, as in the original paper. In the experiments presented here, we reimplemented the PixelHNN architecture as described in and trained it using the full loss. As in the original paper, we used a PixelHNN with HNN, encoder, and decoder subnetworks, each parameterized by a multi-layer perceptron (MLP). The encoder and decoder MLPs use ReLU nonlinearities. Each consists of 4 layers, with 200 units in each hidden layer and an embedding of the same size as the true position and momentum of the system depicted (2 for mass-spring and pendulum, 8 for two-body, and 12 for three-body). The HNN MLP uses tanh nonlinearities and consists of two hidden layers with 200 units and a one-dimensional output. To ensure the difference in performance between the PixelHNN and HGN is not due primarily to architectural choices, we also compare to a variant of the PixelHNN architecture using the same convolutional encoder and decoder as used in HGN. We used identical hyperparameters to those described in Section A.1. We map between the convolutional latent space used by the encoder and decoder and the vector-valued latent required by the HNN using one additional linear layer for the encoder and decoder. In the original paper, the PixelHNN model is trained using full-batch gradient descent. To make it more comparable to our approach, we train it here using stochastic gradient descent with minibatches of size 64 and around 15000 training steps. As in the original paper, we train the model using the Adam optimizer and a learning rate of 1e-3. As in the original paper, we produce rollouts of the model using a Runge-Kutta integrator (RK4). See Section A.6 for a description of RK4. Note that, as in the original paper, we use the more sophisticated algorithm implemented in scipy (scipy.integrate.solve_ivp). The datasets for the experiments described in Section 4 were generated in a manner similar to, for comparative purposes. All of the datasets simulate the exact Hamiltonian dynamics of the underlying differential equation using the default scipy initial value problem solver. After creating a dataset of trajectories for each system, we render those into a sequence of images. The system depicted in each dataset can be visualized by rendering circular objects:
• For the mass-spring, the mass object is rendered as a circle, and the spring and pivot are invisible.
• For the pendulum, only the weight (the bob) is rendered as a circle, while the massless rod and pivot are invisible.
• For the two- and three-body problems, we render each point mass as a circle in a different color.
Additionally, we smooth out the circles such that they do not have hard edges, as can be seen in Fig. 7. In order to obtain the results presented in Tbl. 1, we trained both HGN and HNN for 15000 iterations, with a batch size of 16 for HGN and 64 for HNN.
Given the dataset sizes, this means that HGN was trained for around 5 epochs and HNN was trained for around 19 epochs, which took around 16 hours. Figs. 11-12 plot the convergence rates for HGN (leapfrog) and HNN (conv) on the four datasets. Throughout this paper, we estimate the future state of systems from inferred values of the system position and momentum by numerically integrating the Hamiltonian. We explore three methods of numerical integration: (i) Euler integration, (ii) Runge-Kutta integration and (iii) leapfrog integration. Euler integration estimates the value of a function at time t + dt by incrementing the function's value with the value accumulated by the function's derivative, assuming it stays constant in the interval [t, t + dt]. In the Hamiltonian framework, Euler integration takes the form: $q_{t+dt} = q_t + dt\,\frac{\partial H}{\partial p}\big|_{(q_t, p_t)},\ \ p_{t+dt} = p_t - dt\,\frac{\partial H}{\partial q}\big|_{(q_t, p_t)}$. Because Euler integration estimates a function's future value by extrapolating along its first derivative, the method ignores the contribution of higher-order derivatives to the function's change in time. Accordingly, while Euler integration can reasonably estimate a function's value over short periods, its errors accumulate rapidly as it is integrated over longer periods or when it is applied multiple times. This limitation motivates the use of methods that are stable over more steps and longer integration times. One such method is four-step Runge-Kutta integration (RK4), the most widely used member of the Runge-Kutta family of integrators. Whereas Euler integration estimates the value of a function at time t + dt using only the function's derivative at time t, RK4 accumulates multiple estimates of the function's value in the interval [t, t + dt]. This integral more correctly reflects the behavior of the function in the interval, resulting in a more stable estimate of the function's value. RK4 estimates the state at time t + dt as: $s_{t+dt} = s_t + \frac{dt}{6}(k_1 + 2k_2 + 2k_3 + k_4)$, where $k_1 = f(s_t)$, $k_2 = f(s_t + \frac{dt}{2}k_1)$, $k_3 = f(s_t + \frac{dt}{2}k_2)$, $k_4 = f(s_t + dt\,k_3)$, and $f$ is the time derivative of the state given by the Hamiltonian equations.

Figure 13: A: example of using a symplectic (leapfrog) and a non-symplectic (Euler) integrator on the Hamiltonian of a harmonic oscillator. The blue quadrilaterals depict a volume in phase space over the course of integration. While the symplectic integrator conserves the volume of this region, the non-symplectic integrator causes it to increase in volume with each integration step. The symplectic integrator clearly introduces less divergence in the phase space than the non-symplectic alternative over the same integration window. B: an illustration of the leapfrog updates in the phase space, where q is position and p is momentum.

If we assume that the Hamiltonian takes the separable form H = V(q) + T(p), where V is the potential energy and T is the kinetic energy of the system, we can integrate the Hamiltonian equations using the leapfrog integrator, which in essence updates the position and momentum variables at interleaved time points in a way that resembles the updates "leapfrogging" over each other (see Fig. 13B for an illustration). In particular, the following updates can be applied: $p_{t + \frac{dt}{2}} = p_t - \frac{dt}{2}\frac{\partial V}{\partial q}(q_t)$; $q_{t+dt} = q_t + dt\,p_{t + \frac{dt}{2}}$; $p_{t+dt} = p_{t + \frac{dt}{2}} - \frac{dt}{2}\frac{\partial V}{\partial q}(q_{t+dt})$. As discussed above, leapfrog integration is more stable and accurate over long rollouts than integrators like Euler or RK4. This is because the leapfrog integrator is an example of a symplectic integrator, which means it is guaranteed to preserve the special form of the Hamiltonian even after repeated application.
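A sketch of one leapfrog step for a separable Hamiltonian with T(p) = p^2/2 (unit mass, our simplifying assumption); grad_V is any callable returning the potential gradient dV/dq. As an illustration we use the mass-spring potential V(q) = kq^2/2 with k = 2 from the datasets section (so grad_V(q) = kq) and dt = 0.125 as reported in A.1:

def leapfrog_step(q, p, grad_V, dt=0.125):
    # Half-step momentum update from the potential gradient.
    p_half = p - 0.5 * dt * grad_V(q)
    # Full-step position update using the half-step momentum (T(p) = p^2/2).
    q_next = q + dt * p_half
    # Second half-step momentum update at the new position.
    p_next = p_half - 0.5 * dt * grad_V(q_next)
    return q_next, p_next

# Example: frictionless mass-spring with k = 2, rolled out for 30 steps.
k = 2.0
q, p = 1.0, 0.0
for _ in range(30):
    q, p = leapfrog_step(q, p, grad_V=lambda q: k * q)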
An example visual comparison between a symplectic (leapfrog) and a non-symplectic (Euler) integrator applied over the Hamiltonian of a harmonic oscillator is shown in Fig. 13A. For a more thorough discussion of the properties of leapfrog integration, see the standard references on symplectic integration. The Hamiltonian Flow consists of two components. Firstly, it defines a density model over the joint space s_T = (q_T, p_T) using the Hamiltonian Flow as described in Section 3.3. However, we assume that the observable variable represents only q_T and treat p_T as a latent variable which we have to marginalize over. Since the integral p(q_T) = ∫ p(q_T, p_T) dp_T is intractable, using the introduced variational distribution q(p_T | q_T) we can derive a lower bound on the marginal likelihood:

log p(q_T) ≥ E_{p_T ∼ q(·|q_T)} [ log p(q_T, p_T) − log q(p_T | q_T) ].
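A one-sample Monte Carlo estimate of this bound might look as follows. This is a sketch: `encoder` (returning a torch distribution for q(p_T | q_T)) and `flow.log_prob` (the joint log-density under the Hamiltonian Flow) are assumed interfaces, not names from the paper.

```python
def elbo(q_T, encoder, flow):
    # Variational distribution over the unobserved momentum.
    dist = encoder(q_T)                    # q(p_T | q_T)
    p_T = dist.rsample()                   # reparameterized sample
    log_joint = flow.log_prob(q_T, p_T)    # log p(q_T, p_T)
    # Single-sample lower bound on log p(q_T).
    return log_joint - dist.log_prob(p_T)
```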
We introduce a class of generative models that reliably learn Hamiltonian dynamics from high-dimensional observations. The learnt Hamiltonian can be applied to sequence modeling or as a normalising flow.
913
scitldr
Cloud Migration transforms a customer's data, applications and services from the original IT platform to one or more cloud environments, with the goal of improving the performance of the IT system while reducing the IT management cost. Enterprise-level Cloud Migration projects are generally complex, involving dynamically planning and replanning various types of transformations for up to 10k endpoints. Currently, the planning and replanning in Cloud Migration are generally done manually or semi-manually, with heavy dependency on the migration expert's domain knowledge, which takes days or even weeks for each round of planning or replanning. As a result, an automated planning engine that is capable of generating a high-quality migration plan in a short time is particularly desirable for the migration industry. In this short paper, we briefly introduce the advantages of using AI planning in Cloud Migration, a preliminary prototype, as well as the challenges that require attention from the planning and scheduling community. Automated planning and AI planning have been investigated extensively by researchers and successfully applied in many areas for decades, for example, health care BID0, semiconductor manufacturing BID1, and aviation BID2, to name a few. Meanwhile, attracted by the promise of the scalability, flexibility and potentially lower cost of the resources, more and more enterprises are considering moving their IT infrastructure and applications to Cloud or Hybrid Cloud service platforms, which is called Cloud Migration in general (Armbrust et al. 2010; Khajeh-Hosseini, Greenwood, and Sommerville 2010). Noticing that discussions of using AI planning in Cloud Migration are limited both in academia and in industry, in this short paper we identify the advantages and challenges of applying AI planning to Cloud Migration by (i) introducing Cloud Migration and its planning problem; (ii) demonstrating problem feasibility by showing a prototype AI planning model; and (iii) discussing the limits of the current model and future research. Cloud Migration transforms a customer's data, applications and services from the original IT platform, hosted on servers in-house or in a cloud environment, to one or more cloud environments, with the goal of improving the performance of the IT system while reducing the IT management cost. Generally speaking, enterprise-level Cloud Migration is a complex and usually long-running process that requires careful planning. Cloud Migration includes four major steps: Discovery, Planning, Execution and Validation BID6. In the Discovery stage, migration experts investigate the current IT system, collect data and identify the customer's migration goals. Then migration experts allocate resources and schedule the execution activities, referred to as the Planning stage. Next the migration is executed as planned, which is called the Execution stage. In the last Validation stage, all the applications are tested to verify that they are running as expected in the new cloud environment. Due to the complexity of the IT system and IT infrastructure, there may not be clear boundaries between the major steps. It could happen that in the Planning stage some data inconsistencies are observed, additional discoveries are performed, and the migration execution needs to be re-scheduled or re-planned. Due to the complexity of migration projects, the low tolerance for errors and the heavy dependence on the migration expert's domain knowledge, current practitioners mostly perform migration planning either manually
or using tools with manually created runbooks (Transition Manager, Velostrata). For example, in Transition Manager, the user has to manually upload scripts, e.g., Groovy scripts (The Apache Groovy Programming Language), and ask the tool to generate a runbook for existing wave bundles. Meanwhile, Velostrata Manager creates a .csv-format template for the user to manually put tasks into and create the runbook (Creating and modifying runbooks). These planning and replanning approaches rely heavily on the practitioner's previous migration experience and domain knowledge, and hence are not scalable. With the fast evolution of computing speed and machine learning technologies, domain-independent AI planners are becoming more and more powerful (Ghallab, Nau, and Traverso 2004). In a Cloud Migration planning problem, there are N assets to be migrated. Assets may communicate with each other, for example, an application reads/writes a database; this causes dependencies between assets and enforces precedence constraints for migration tasks. For instance, if asset A depends on asset B, the migration of asset A has to be done before asset B's migration. The goal of migration planning is to allocate resources and create the sequence of tasks to be executed. In the case of enterprise-level migration, the execution should be performed in a limited time window to minimize potential business disruption. From an application point of view, the main step in developing an AI planner based on a domain-independent planner is to formulate the planning problem in Cloud Migration as a planning problem for the planner. In one of our prototype AI planners, a simple scenario, in which only the migration of physical servers and virtual machines is considered, is modeled as a Domain file and a Problem file using the Planning Domain Definition Language (PDDL). The objects in the domain file are server and wave. Each server is assigned a numeric value called 'effort hours', which represents the cost of migrating this server. Each wave is assigned a numeric value called 'effort hour limit', which enforces a capacity constraint on the servers to be migrated in each wave. The goal is to migrate all the servers without violating the capacity constraint. A planner based on the Metric-FF planner was developed to test the performance, and a graphical UI was created for users to upload a spreadsheet containing server information BID7. In particular, translation engines were developed to generate the Domain.pddl and Problem.pddl files automatically. When there is a limited number of servers, the planner finds a solution in a few seconds. However, when tested with 500 servers, the planner did not find any solution in 2 hours. Figure 1 shows an overview of the prototype planner.
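To make the translation step concrete, here is a minimal sketch of how such a Problem.pddl file could be generated from server data. The predicate and fluent names (migrated, effort-hours, effort-limit) are our own assumptions for illustration, not the prototype's actual domain model:

```python
def generate_problem_pddl(servers, waves, wave_limit):
    """servers: dict mapping server name to effort hours;
    waves: list of wave names; wave_limit: effort hour limit per wave."""
    objects = " ".join(f"{s} - server" for s in servers)
    objects += " " + " ".join(f"{w} - wave" for w in waves)
    init = "\n    ".join(
        f"(= (effort-hours {s}) {h})" for s, h in servers.items())
    init += "\n    " + "\n    ".join(
        f"(= (effort-limit {w}) {wave_limit})" for w in waves)
    goal = " ".join(f"(migrated {s})" for s in servers)
    return (f"(define (problem cloud-migration)\n"
            f"  (:domain migration)\n"
            f"  (:objects {objects})\n"
            f"  (:init\n    {init})\n"
            f"  (:goal (and {goal})))\n")

print(generate_problem_pddl({"srv1": 8, "srv2": 12}, ["w1", "w2"], 40))
```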
In conclusion, in this short paper, the Cloud Migration process is investigated and a prospective research direction is identified around using domain-independent AI planners in Cloud Migration planning. Automated migration planning is desirable for practitioners from both a cost perspective and a quality perspective. It also brings in new research topics. Some of them are listed as follows:
• Optimize the modeling of the migration planning problem so that the domain and problem files can be generated faster, shortening the auto-planning time.
• Noticing that many top-performing planners in the IPC (International Planning Competition) do not support metric features, efficient algorithms that remove the metric requirements in the resource-planning part of a migration planning problem need to be developed.
• Improve the planner's computational speed, or develop algorithms so it can generate migration plans for thousands of assets and more complicated migration scenarios.
In this short paper, we briefly introduce the advantages of using AI planning in Cloud Migration, a preliminary prototype, as well as the challenges that require attention from the planning and scheduling community.
914
scitldr
Unsupervised domain adaptation aims to generalize the hypothesis trained in a source domain to an unlabeled target domain. One popular approach to this problem is to learn a domain-invariant representation for both domains. In this work, we study, theoretically and empirically, the explicit effect of the embedding on generalization to the target domain. In particular, the complexity of the class of embeddings affects an upper bound on the target domain's risk. This is reflected in our experiments, too. Domain adaptation is critical in many applications where collecting large-scale supervised data is prohibitively expensive or intractable, or where conditions at prediction time can change. For instance, self-driving cars must be robust to various conditions such as different weather, changes of landscape and traffic. In such cases, the model learned from limited source data should ideally generalize to different target domains. Specifically, unsupervised domain adaptation aims to transfer knowledge learned from a labeled source domain to similar but completely unlabeled target domains. One popular approach to unsupervised domain adaptation is to learn domain-invariant representations BID7 BID5, by minimizing a divergence between the representations of source and target domains. The prediction function is learned on the latent space, with the aim of making it domain-independent. A series of theoretical works justifies this idea BID9 BID1 BID3. Despite the empirical success of domain-invariant representations, exactly matching the representations of source and target distribution can sometimes fail to achieve domain adaptation. For example, BID13 show that exact matching may increase target error if label distributions differ between source and target domain, and propose a new divergence metric to overcome this limitation. BID14 establish lower and upper bounds on the risk when label distributions between source and target domains differ. BID6 point out the information lost in non-invertible embeddings, and propose different generalization bounds based on the overlap of the supports of source and target distribution. In contrast to previous analyses that focus on changes in the label distributions or on joint support, here we study the effect of the complexity of the joint representation. In particular, we show a general bound on the target risk that reflects a tradeoff between the embedding complexity and the divergence of source and target in the latent representation space. A too-powerful class of embedding functions can result in overfitting the source data and the distribution matching, leading to arbitrarily high target risk. Hence, a restriction (taking into account assumptions about correspondences and invariances) is needed. Our experiments reflect these trends empirically, too. For simplicity, we consider binary classification with input space X ⊆ R^n and output space Y = {0, 1}. Define H to be the hypothesis class from X to Y. The learning algorithm obtains two datasets: labeled source data X_S with distribution p_S, and unlabeled target data X_T with distribution p_T. We will use p_S and p_T to denote the joint distributions on data and labels X, Y and the marginals, i.e., p_S(X) and p_S(Y).
Unsupervised domain adaptation seeks a hypothesis h ∈ H that minimizes the risk in the target domain measured by a loss function ℓ (here, zero-one loss):

R_T(h) = E_{(x,y)∼p_T}[ ℓ(h(x), y) ].

We will not assume common support in source and target domain, in line with standard benchmarks for domain adaptation such as adapting from MNIST to MNIST-M. A common approach to domain adaptation is to learn a joint embedding of source and target data BID5 BID12. The idea is that aligning source and target distributions in this latent space Z results in a domain-invariant representation, and hence a subsequent classifier f from the embedding to Y will generalize from source to target. Formally, this results in the following objective function on the hypothesis h = f∘g, where G is the class of embedding functions to Z, and we minimize a divergence d between the distributions p_S(Z_g) = p_S(g(X)) and p_T(Z_g) of source and target after mapping to Z:

min_{f∈F, g∈G} R_S(f∘g) + d( p_S(Z_g), p_T(Z_g) ).      (2)

The divergence d could be, e.g., the Jensen-Shannon BID5 or Wasserstein distance BID11. Prior work introduced the H∆H-divergence to bound the worst-case loss from extrapolating between domains. Let the expected disagreement between two hypotheses h, h' on a distribution p be E_{x∼p}[ 1{h(x) ≠ h'(x)} ]; then the H∆H-divergence is defined as follows.

Definition 1. (H∆H-divergence) Given two domain distributions p_S and p_T over X, and a hypothesis class H, the H∆H-divergence between p_S and p_T is

d_{H∆H}(p_S, p_T) = 2 sup_{h,h'∈H} | E_{x∼p_S}[1{h(x)≠h'(x)}] − E_{x∼p_T}[1{h(x)≠h'(x)}] |.

This divergence allows one to bound the risk on the target domain:

Theorem 1. For all hypotheses h ∈ H, the target risk is bounded as

R_T(h) ≤ R_S(h) + (1/2) d_{H∆H}(p_S, p_T) + λ_H,

where λ_H is the best joint risk

λ_H = min_{h'∈H} [ R_S(h') + R_T(h') ].

Similar results have been obtained for continuous labels BID3 BID9. Theorem 1 is an influential theoretical result in unsupervised domain adaptation, and motivated work on domain-invariant representations. For example, recent work BID5 BID6 applied Theorem 1 to the hypothesis space F that maps the representation space Z induced by an encoder g to the output space:

R_T(f∘g) ≤ R_S(f∘g) + (1/2) d_{F∆F}( p_S(Z_g), p_T(Z_g) ) + λ_F(g),      (3)

where λ_F(g) is the best hypothesis risk with fixed g, i.e.,

λ_F(g) = min_{f'∈F} [ R_S(f'∘g) + R_T(f'∘g) ].

The F∆F-divergence implicitly depends on the fixed g and can be small if g provides a suitable representation. However, if g induces a wrong alignment, then the best hypothesis risk λ_F(g) is large for any function class F. The following example will illustrate such a situation, motivating us to explicitly take the class of embeddings into account when bounding the target risk. We begin with an illustrative toy example. FIG1 shows a binary classification problem in 2D with disjoint support and a slight shift in the label distributions from source to target: p_S(y=1) = p_T(y=1) + 2ε. Assume the representation space is one-dimensional, so the embedding g is a function from 2D to 1D. If we allow arbitrary, nonlinear embeddings, then, for instance, the embedding shown in FIG1(b), together with an optimal predictor, achieves zero source loss and zero divergence, and is hence optimal according to objective (2). However, the target risk of this combination of embedding and predictor is maximal: R_T(f∘g) = 1. If we restrict the class G of embeddings to linear maps g(x) = Wx where W ∈ R^{1×2}, then the embeddings that are optimal with respect to the objective are of the form W = [a, 0]. Together with an optimal source classifier f, they achieve a non-zero value of 2ε for objective (2) due to the shift in class distributions. However, these embeddings retain label correspondences, and can lead to zero target risk. This example illustrates that a too-rich class of embeddings can "overfit" the alignment, and hence lead to arbitrarily bad solutions.
Hence, the complexity of the encoder class plays an important role in learning domain-invariant representations too. Motivated by the above example, we next expose how the bound on the target risk depends on the complexity of the embedding class. To do so, we apply Theorem 1 to the hypothesis h = f∘g:

R_T(f∘g) ≤ R_S(f∘g) + (1/2) d_{FG∆FG}(p_S, p_T) + λ_{FG}.      (4)

Comparing this bound to the previous one, we notice two differences: the best in-class joint risk now minimizes over both F and G, i.e.,

λ_{FG} = min_{f'∈F, g'∈G} [ R_S(f'∘g') + R_T(f'∘g') ],

which is smaller than λ_F(g) and reflects the fact that we are learning both f and g. In return, the divergence term d_{FG∆FG}(p_S, p_T) becomes larger than the one in bound (3). To better understand these tradeoffs, we derive a more interpretable form of the bound on the target risk. Before presenting the bound, we define an extended version of the F∆F-divergence.

Definition 2. For two domain distributions p_S and p_T over X, an encoder class G, and a predictor class F, the F(G∆G)-divergence between p_S and p_T is

d_{F(G∆G)}(p_S, p_T) = 2 sup_{f∈F, g,g'∈G} | E_{x∼p_S}[ 1{f(g(x)) ≠ f(g'(x))} ] − E_{x∼p_T}[ 1{f(g(x)) ≠ f(g'(x))} ] |.

Note that the F(G∆G)-divergence is strictly smaller than the FG∆FG-divergence, since the two hypotheses in the supremum, f∘g and f∘g', share the same predictor f. We are ready to state the following result.

Theorem 2. For all f ∈ F and g ∈ G,

R_T(f∘g) ≤ R_S(f∘g) + (1/2) d_{F∆F}(p_S(Z_g), p_T(Z_g))  [term i]  + (1/2) d_{F(G∆G)}(p_S, p_T) + λ_{FG}(g)  [term ii],

where λ_{FG}(g) is the best in-class joint risk, defined as

λ_{FG}(g) = min_{f'∈F} R_S(f'∘g) + min_{f'∈F, g'∈G} R_T(f'∘g').

A detailed proof of the theorem may be found in the Appendix. The first term of the bound is the source risk. The second term, i, is the F∆F-divergence between the distributions p_S(Z_g) and p_T(Z_g) in the representation space; this also appears in the previous bound. The first term in ii measures the F(G∆G)-divergence between the source and target distributions, which may decrease as the complexity of the encoder decreases. However, a less complex encoder class G can lead to increasing the best hypothesis risk λ_{FG}(g). Therefore, ii makes explicit a trade-off between the divergence and the model complexity. Note that, as opposed to λ_{FG}, λ_{FG}(g) also measures the correctness of the encoder in the source domain. If the encoder fails to provide informative representations in the source domain, then the first term in λ_{FG}(g) can be large. The last two terms in Theorem 1 express a similar complexity trade-off as ii, but this time with respect to the hypothesis class H, which here combines the encoder and predictor. Hence, directly applying Theorem 1 to the composition (Equation 4) treats both jointly and does not make the role of the embedding as explicit as Theorem 2. For example, Theorem 2 shows that we can also make the bound tighter by minimizing the divergence between the corresponding distributions in the embedding space, as long as the encoder provides useful representations in the source domain. If i is sufficiently small, the FG∆FG-divergence reduces to the F(G∆G)-divergence, which is strictly smaller. Comparing to the previous bound in Equation (3), which assumes a fixed g, we do not assume a known encoder and instead quantify the effect of the encoder family. Moreover, the term λ_F(g) in bound (3) involves both the source and target risk, whereas in λ_{FG}(g) the encoder g only affects the source risk, which can be estimated empirically. Importantly, without restricting the complexity of the encoder or embedding, the F(G∆G)-divergence can be large, indicating that the target risk may be large too. This suggests that restricting the model complexity of the embedding is crucial for domain-invariant representation learning.
To reduce the worst-case divergence i, we need to restrict the encoder family to those encoders that can approximately minimize i, in coordination with the predictor class F. Practically, we can optimize the original objective of domain-invariant representations in Equation 2 to align the latent distributions. Term ii implies that we should choose the minimal-complexity encoder class G that is still expressive enough to encode the data from both domains. Practically, this can be done by regularizing the encoder, e.g., restricting Lipschitz constants or norms of weight matrices. More explicitly, one may limit the number of layers of a neural network, or apply inductive biases via the selection of network architectures. For instance, compared to fully connected networks (FCs), convolutional neural networks (CNNs) restrict the output to be spatially consistent with respect to the input. Next, we empirically test Theorem 2 via one example of domain-invariant representations: Domain-Adversarial Neural Networks (DANN) BID5, which measure the latent divergence via a domain discriminator (Jensen-Shannon divergence). We use the standard benchmark MNIST → MNIST-M (Ganin & Lempitsky), where the task is to classify unlabeled handwritten digits overlayed with random photographs (MNIST-M) based on labeled images of digits alone (MNIST). We consider two categories of complexity: the number of layers and inductive bias (CNN). To analyze the effect of the encoder's complexity, we augment the original two-layer CNN encoders with 1 to 5 additional CNN layers, leaving other settings unchanged. We retrain each model 5 times and plot the mean and standard deviation of the target error with respect to the number of layers in FIG2(a): initially, the target error decreases, and then increases when more layers are added. This corroborates our theory: the CNN encoder without additional layers does not have enough expressive power. As a consequence, the best hypothesis risk term λ_{FG} is larger. However, when more layers are added, the complexity increases and subsequently makes the disagreements larger. To investigate the importance of inductive bias in domain-invariant representations, we replace the CNN encoder with an MLP encoder. The experimental results are shown in FIG2(b). Comparing the target error between (a) and (b) in FIG2, we can see that the target error with an MLP encoder is significantly higher than with a CNN encoder. Compared to CNNs, which encode invariance via pooling and learned filters, MLPs do not have any inductive bias and lead to worse performance. In fact, the target error with MLP-based domain adaptation is higher than when just training on the source, suggesting that, without an appropriate inductive bias, learning domain-invariant representations can even worsen the performance. To gain deeper insight, we use t-SNE BID8 to visualize the source and target embedding distributions in FIG2(c),(d). With the inductive bias of CNNs, the representations of the target domain align well with those of the source domain. In contrast, the MLP encoder results in a strong label mismatch. The experiments show that the complexity of the encoder can have a direct effect on the target error. A more complex encoder class leads to a larger theoretical bound on the target error and, indeed, aligned with the theory, we see a significant performance drop in the target domain. Moreover, the experiments suggest that inductive bias is important too. With a suitable inductive bias such as CNNs, DANN achieves higher performance than with the MLP encoder, even if the CNN encoder has twice the number of layers. CNNs are standard for many vision tasks, such as digit recognition. However, explicit supervision may be required to identify the encoder class when we have less prior knowledge about the task BID10 BID2.
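As a side note, a common way to empirically estimate such divergences in the latent space (a standard trick in the domain-adaptation literature, not taken from this paper) is the proxy A-distance, which converts the error of a domain classifier trained on the embeddings into a divergence estimate:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def proxy_a_distance(z_source, z_target):
    """Proxy A-distance 2(1 - 2*err), where err is the cross-validated
    error of a classifier separating source from target embeddings."""
    Z = np.vstack([z_source, z_target])
    y = np.concatenate([np.zeros(len(z_source)), np.ones(len(z_target))])
    acc = cross_val_score(LogisticRegression(max_iter=1000), Z, y, cv=5).mean()
    return 2.0 * (1.0 - 2.0 * (1.0 - acc))
```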
In this work, we study the role of embedding complexity for domain-invariant representations. We theoretically and empirically show that restricting the encoder is necessary for successful adaptation, a fact that has mostly been overlooked by previous work. In fact, without carefully selecting the encoder class, learning domain-invariant representations might even harm the performance. Our observations motivate future research on identifying appropriate encoder classes for various tasks.

Theorem 2. For all f ∈ F and g ∈ G,

R_T(f∘g) ≤ R_S(f∘g) + (1/2) d_{F∆F}(p_S(Z_g), p_T(Z_g)) + (1/2) d_{F(G∆G)}(p_S, p_T) + λ_{FG}(g),

where λ_{FG}(g) is the best in-class joint risk defined as above.

Proof. We first define the optimal composition hypothesis f*∘g* with respect to an encoder g to be the hypothesis which minimizes the following error: DISPLAYFORM2. By the triangle inequality for classification error, DISPLAYFORM3. The second term on the R.H.S. of Eq. 4 can be bounded as DISPLAYFORM4. The third term on the R.H.S. of Eq. 4 can be bounded as DISPLAYFORM5. Combining the above bounds, we have DISPLAYFORM6, where DISPLAYFORM7.
A general upper bound on the target domain's risk that reflects the role of embedding complexity.
915
scitldr
The domain of time-series forecasting has been extensively studied because it is of fundamental importance in many real-life applications. Weather prediction, traffic flow forecasting or sales are compelling examples of sequential phenomena. Predictive models generally make use of the relations between past and future values. However, in the case of stationary time-series, observed values also drastically depend on a number of exogenous features that can be used to improve forecasting quality. In this work, we propose a change of paradigm which consists in learning such features in embedding vectors within recurrent neural networks. We apply our framework to forecast smart card tap-in logs in the Parisian subway network. Results show that context-embedded models perform quantitatively better in one-step ahead and multi-step ahead forecasting. Classical statistical forecasting methods rely on the existence of temporal correlation between past and future values. In particular, the auto-regressive component of ARIMA estimators BID0 models the relation between past and future as a linear regression. In the deep learning paradigm, Recurrent Neural Networks have long been used to tackle sequential problems. Increasingly complex models such as ConvLSTM BID13 or Graph Neural Networks are being developed to model multivariate phenomena and allow a precise modeling of the temporal dynamics. However, exogenous factors can greatly influence the observed values and are not taken into account by the mentioned models. For example, the type of road can drastically change traffic flow predictions, the period of the year will determine the values of sales time-series, and so on. In this work, we refer to these features as contextual information, or context. Such context is naturally used when dealing with stationary time-series to construct baselines based on the average of past values given a context. NARX models and their neural network variations also make use of context by inputting it jointly with previous values of the forecast variable BID17. Similar to how Graph NNs learn relations between nodes, we propose, for multivariate stationary time-series, to learn context within a recurrent architecture, and we introduce context-embedded RNNs. For each contextual feature, we concatenate to the observed value an embedding that is to be learned jointly with the weights of the network. We do not deal with the case of continuous features, but these could be transformed into categories. We tested our framework on one-step ahead and multi-step ahead forecasting of public transportation tap-in logs, where we consider spatial context in the form of subway stations and temporal context through the day of the week and the time of the day. To the best of our knowledge, there exists no good-quality public dataset containing subway logs at a satisfying granularity. We realized experiments on data provided by Ile-de-France Mobilités (the Parisian region public transportation agency, https://www.iledefrance-mobilites.fr/), but we expect that the fast development of data collection in this domain will entail the availability of public datasets in the near future. On the other hand, all of the source code used to realize the experiments is available on https://github.com/XXXX. Results of the experiments show that contextual models consistently outperform other recurrent models as well as the historical average baseline, which is especially strong in the case of stationary time-series.
Contextual models perform particularly well for long-term forecasting. In summary, in this paper we propose a new paradigm for learning contextual information within RNNs, which quantitatively improves forecasting performance by allowing a fine-grained modeling of local dynamics. The remainder of this paper is organized as follows: related work in time-series forecasting and the use of context is presented in Section 2; the proposed models are introduced in Section 3 and are tested in prediction experiments in Section 4.

Time-series forecasting. When it comes to time-series forecasting, classical methods rely on ARMA models BID0 and their variants: ARIMA for non-stationary series, or SARIMA in the case of seasonality. However, RNNs have long been used for this task BID3 and perform well on a variety of applications BID20. They are now employed to model more complex data. For instance, spatio-temporal data, which are similar to the application studied in this work, can be dealt with using a combination of CNN and RNN as in BID13. More generally, forecasting is viewed as a graph problem in many works (BID4; BID21). In particular, applied to traffic forecasting, BID4 learn weighted convolutional features representing the relations between each node of the graph. These features are then processed by an LSTM. While we could deal with the use case of transportation log forecasting with such Graph NNs, we choose to develop a more general framework where we learn the peculiarities of each location instead of the relations between them.

Contextual information. Jointly with complex architectures, contextual features can be used to improve forecasting performance. In an early work, BID15 develop the KARIMA algorithm. It uses a Kohonen neural network to cluster data based on present and past observations, but also the time-step and day of the week. Then an ARIMA is used to predict the next value. More recently, BID18 and BID5 use additional temporal features in LSTMs and gradient-boosted decision trees, respectively. In general, predictive models with exogenous features belong to the class of NARX models, such as BID6, which forecast groundwater level based on precipitation, or BID8, where building heat load depends on many features. A different method is adopted by BID10 in the next-location prediction problem. They replace the weight matrix multiplied by the input of an RNN with transition matrices representing spatial and temporal information. In this work, we choose to let the neural network learn its own representation of the contextual features.

Public transportation data. We apply our models to public transportation data, a domain which has not been studied as extensively as traffic forecasting because of the late arrival of data. BID19 combine SVMs and Kalman filters to predict bus arrival times, while BID1 only consider historical averages for tap-in and tap-out forecasting. Closer to our work, BID14 use LSTM networks for tap-in data in the Parisian area. However, they do not use context in the proposed models, whether spatial context, because they study a small business zone, or temporal context.

A particularity of the data is the discontinuity caused by the closure of the subway network every night (as mentioned in BID1; BID12). Therefore the observations for each day form a multivariate time-series containing the number of passengers entering the transportation network.
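Concretely, these daily series can be assembled into a count tensor of tap-in logs. The sketch below assumes logs arrive as (day index, station index, seconds since opening) triples, aggregated by fixed windows (15 minutes in the experiments later in the paper); the notation for this tensor is defined next:

```python
import numpy as np

def build_tensor(logs, n_days, n_stations, n_steps, window=15 * 60):
    """Aggregate raw tap-in logs into an (N, S, T) count tensor."""
    X = np.zeros((n_days, n_stations, n_steps))
    for day, station, seconds in logs:
        X[day, station, int(seconds // window)] += 1
    return X
```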
Data is processed in the form of a 3D tensor X ∈ R^{N×S×T}, with N the number of days, S the number of subway stations and T the number of time-steps. In particular, for a station s, X_s = X_{:,s,:} ∈ R^{N×T} contains all the values for a specific location. We also denote by x_s = X_{d,s,:} ∈ R^T the vector of values for a day d and station s, and by x_t = X_{d,:,t} ∈ R^S the values for a day d at time t. In the recurrent models, the hidden state at time t of size h will be noted h_t ∈ R^h, or h_t^s when it represents a single location s. We will also introduce embedding vectors for the spatial location z_s, the day of the week z_d and the time-step z_t, whose sizes are respectively λ_s, λ_d and λ_t. Recurrent neural networks are a natural choice for forecasting time-series as they encode a latent state for each time step which is used to predict the next value. These architectures can model the dynamics of time-series and have a memory allowing the use of several observations in the past. In particular, they may be able to adapt themselves to anomalous behaviors, making them more robust. We propose three recurrent architectures, introducing three different ways of dealing with spatial context. Two of them model it implicitly while the third one explicitly learns it. Each architecture is composed of a recurrent encoder E transforming the observations into hidden latent states. These states are then decoded into predictions using a linear layer D. Each of the models can then be completed with temporal context.

Univariate RNN. First of all, we consider each station separately. That is, we explicitly train S distinct RNNs over as many matrices of samples X_s ∈ R^{N×T}. In this case the input dimension of each RNN is 1, i.e. we compute p(x_{t+1}^s | x_t^s, ..., x_0^s) for each t, which is decoded into the prediction by D_s:

h_{t+1}^s = E_s(x_t^s, h_t^s),    x̂_{t+1}^s = D_s(h_{t+1}^s).      (1)

Multivariate RNN. In this model we consider that each sample represents a single day over the whole network and is a multi-dimensional series X_d ∈ R^{S×T}. This representation assumes a correlation between the values of the different stations at each time t. In this setting we compute p(x_{t+1} | x_t, ..., x_0). This is similar to spatio-temporal models, but here the relations are not specified and the network must discover them during training. At time t the vector sample x_t ∈ R^S represents the observed state of the subway network, which is combined with the current hidden state h_t ∈ R^h by the recurrent encoder to compute the next hidden state. During this stage, the recurrent encoder E uses several layers to combine past and current information into a synthetic state which is decoded back to S predictions by D (see Equation 2 and Figure 2):

h_{t+1} = E(x_t, h_t),    x̂_{t+1} = D(h_{t+1}).      (2)

At the end of the day, this architecture captures the dynamics and the correlations of the entire network. Spatial context is not explicitly specified but is included in the weights of the network. This second model offers a natural way to deal with multivariate series. However, because of the large number of relations to learn between stations compared to the scarcity of samples, it may face overfitting and perform poorly.

Spatial RNN. Finally, we propose a hybrid architecture which mixes the univariate and multivariate ones. As with the Univariate RNN, we consider N × S samples x_s ∈ R^T, each encoded into its own hidden state. However, there is a single couple (E, D) shared across all the stations, as in the Multivariate RNN, which allows taking into account the correlations between the stations and greatly reduces the number of weights.
This time, spatial context is explicitly learned in the form of a matrix of spatial embeddings Z_S ∈ R^{S×λ_s}, hence the name Spatial RNN. For each station s, the corresponding embedding z_s is concatenated to the observation as in FIG3, where c denotes the concatenation operation. At time step t, for a station s, the observation x_t^s ∈ R is concatenated to the embedding z_s ∈ R^{λ_s}. The resulting vector and the hidden state h_t^s ∈ R^h are encoded via the common encoder E into a hidden state representing only this station. This state is then decoded into a single-valued prediction:

h_{t+1}^s = E(c(x_t^s, z_s), h_t^s),    x̂_{t+1}^s = D(h_{t+1}^s).      (3)

(FIG3 caption: values x_t^s, ..., x_T^s are concatenated with a vector of embeddings z_s ∈ R^{λ_s} and then processed by a recurrent encoder which computes a hidden state h_{t+1}^s ∈ R^h for each t. This state is then decoded into a single prediction.)

In addition to directly learning spatial context, this architecture offers the possibility to scale to a network of thousands or tens of thousands of stations, because the number of recurrent weights is fixed. More generally, learning embeddings greatly helps to reduce dimensionality when dealing with a large number of contextual cases, compared to NARX models. We proposed three different ways to deal with spatial context, one of them being to learn it. A promising way to improve performance is to introduce temporal context in the models. We consider two distinct time scales for temporal context, namely the day of the week and the time of the day. Indeed, the number of logs during one day at a specific time step is expected to be the same from one week to another. We wish to see if the model is able to learn and discover meaningful representations of such temporal entities. Therefore, for each recurrent architecture we add the possibility to concatenate temporal embeddings to the observations. It is noteworthy that the temporal embeddings are shared across all networks, i.e. there is one set of embeddings for the entire Univariate architecture, and not one different set per station. Similarly to the way we dealt with spatial context, we could design multivariate and univariate architectures for days and time-steps. However, we lack data to learn such models, and the overfitting risk would be especially high for the day-of-the-week scale.

Day embeddings. We first introduce embeddings corresponding to the day of the week, via a matrix (z_d)_{d∈{1,..,7}} ∈ R^{7×λ_d} containing 7 different embeddings. Because we focus on fully contextual models, we only present in Equation 4 the prediction in the Spatial case, but temporal embeddings can be used for the other architectures as well:

h_{t+1}^s = E(c(x_t^s, z_s, z_d), h_t^s),    x̂_{t+1}^s = D(h_{t+1}^s).      (4)

Time-step embeddings. Similarly, the number of logs is very dependent on the time of the day, with notable morning and evening peak hours separated by off-peak time. Therefore we learn a matrix of embeddings (z_t)_{t∈{1,..,T−1}} ∈ R^{(T−1)×λ_t}. The prediction in the Spatial case is presented in Equation 5:

h_{t+1}^s = E(c(x_t^s, z_s, z_t), h_t^s),    x̂_{t+1}^s = D(h_{t+1}^s).      (5)

These embeddings can be learned using each of the architectures presented before, and the two types of temporal embeddings can obviously be combined. An illustration for the Spatial model with day and time embeddings is presented in FIG4: computing predictions for a particular station s using the spatial architecture with temporal context. Given a day d, at each step t, the observed value x_t^s is concatenated with three embeddings representing the station, the day and the time, respectively z_s ∈ R^{λ_s}, z_d ∈ R^{λ_d} and z_t ∈ R^{λ_t}. The obtained vector is processed by a recurrent encoder E (common to all stations) to compute a hidden state h_{t+1}^s. Finally this vector is decoded into a single prediction x̂_{t+1}^s.
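A minimal PyTorch sketch of this fully contextual Spatial model follows; the embedding sizes match the hyperparameters reported in the next section, but the exact layer layout is our own assumption:

```python
import torch
import torch.nn as nn

class SpatialContextRNN(nn.Module):
    """Shared GRU encoder/decoder with learned station, day, and
    time-step embeddings concatenated to each observation."""
    def __init__(self, n_stations, n_days=7, n_steps=95, hidden=64,
                 dim_s=80, dim_d=4, dim_t=30):
        super().__init__()
        self.station = nn.Embedding(n_stations, dim_s)
        self.day = nn.Embedding(n_days, dim_d)
        self.time = nn.Embedding(n_steps, dim_t)
        self.encoder = nn.GRU(1 + dim_s + dim_d + dim_t, hidden,
                              batch_first=True)
        self.decoder = nn.Linear(hidden, 1)

    def forward(self, x, s, d, t):
        # x: (batch, T, 1) observed logs; s, d: (batch,); t: (batch, T)
        T = x.size(1)
        ctx = torch.cat([
            self.station(s).unsqueeze(1).expand(-1, T, -1),
            self.day(d).unsqueeze(1).expand(-1, T, -1),
            self.time(t)], dim=-1)
        h, _ = self.encoder(torch.cat([x, ctx], dim=-1))
        return self.decoder(h)   # one-step-ahead predictions
```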
We train our models on a data set provided by Ile-de-France Mobilités (the transport agency of the Parisian region). It contains 256,028,548 logs (user, station, time) between October and December 2015 across 300 subway stations. We aggregate the logs by windows of 15 minutes, in order to reduce the inherent noise in the data and allow tractable computation. From the data set we remove 3 stations which were undergoing planned works during the period. We also pull out 15 days with disturbed traffic patterns that can be considered anomalies. Finally we have S = 297 stations and N = 77 days. Those days are split into 70% train and 30% test samples. Splits are stratified with regard to the day of the week, meaning that, e.g., Sundays are represented proportionally in the train and test splits. In addition, 15% of the train split is kept for validation. In the end, there are 45 days in the train split, 8 in validation and 24 in test.

Scaling. The transportation network is comprised of a few hubs and many less crowded stations. The existence of utterly different scales complicates the prediction problem, especially in the multivariate setting. To tackle this problem we rescale the data set, considering each station separately. Note that this also simplifies gradient computation. In more detail, we apply a two-step procedure:
• First, for a station s we calculate the 99.9th percentile and replace all values greater than or equal to this bound by it. This removes the local outliers we missed when skipping some days.
• Then the values of s are scaled between -1 and 1 by min-max scaling.
Treating each station one by one prevents the more important stations from squeezing the values of minor ones. For these two steps, the percentile and the scaling values are computed on the train set and then applied to the other sets. In this work we use vanilla RNNs as well as Gated Recurrent Unit (GRU) networks BID2 for the encoder. Models are trained with pytorch BID11 on GPU using the well-known optimizer Adam (Kingma & Ba) with a learning rate of 0.0001 and the Mean Squared Error (MSE) loss. To select the best hyperparameters and epoch during training, we monitor the root MSE computed on descaled predictions of the validation set. Hyperparameters are presented in TAB1, and we use λ_s = 80, λ_d = 4 and λ_t = 30 for the embedding sizes. Experiments are run with 5 different random seeds to compute the standard deviation of the error. A strong baseline is constructed by averaging previous values given their context. Dealing with a similar application of tap-in log forecasting, BID12 propose a Bayesian network, but it performs slightly worse than the average baseline. Indeed, the considered series are very stationary and heavily depend on the context. The baseline model is a tensor of predictions of size 7×S×T, where the first dimension corresponds to the day of the week. For a specific day d, station s and time-step t, the average baseline equals the mean of X_{d',s,t} over all training days d' falling on the same day of the week as d, with D being a look-up table from date stamp to day of the week. This model is only based on domain expert knowledge and contextual information. Unlike machine learning models, it cannot adapt to anomalous behaviors, but it is context aware. The RMSE of the different architectures before the addition of temporal context is presented in TAB2. All recurrent models, using RNN or GRU, significantly outperform the baseline.
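For concreteness, the historical-average baseline described above can be computed in a few lines (a sketch; `weekdays` is assumed to map each training day to 0-6):

```python
import numpy as np

def average_baseline(X_train, weekdays):
    """One prediction per (day-of-week, station, time-step) cell,
    averaged over the training days with that day of the week.
    X_train: (N, S, T) tensor; weekdays: length-N array in 0..6."""
    baseline = np.zeros((7,) + X_train.shape[1:])
    for d in range(7):
        baseline[d] = X_train[weekdays == d].mean(axis=0)
    return baseline   # predict baseline[D(date), s, t]
```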
In particular, we check in FIG5 that the models learn more than the average behavior by plotting predictions during November 4th. An anomaly seems to occur during the day, disturbing the baseline while the recurrent models precisely fit the unusually low traffic. This means that the proposed models learned the dynamics of the time-series and are robust to unseen values. (FIG5 caption: November 4th is not a particular day in itself, but an anomaly seems to have happened. The baseline mispredicts while our recurrent models correctly fit the ground truth.) We assumed that it would be beneficial to combine the dynamics of recurrent models with the temporal contextual information used in the baseline. To that end we learned day and time embeddings within the previous models and present the results in TAB3. Since RNN and GRU performed similarly, we chose to display only the GRU results. The first column corresponds to the previous results. With the exception of day embeddings for the Multivariate and Univariate GRU, the addition of temporal context benefits all models. Interestingly, the combination of time and day embeddings for these two architectures is better than time embeddings alone. On the opposite, the Spatial model benefits more from day embeddings. In the previous experiments we focused on predicting one value from each observation. However, we would like our model to deliver predictions over a window wider than 15 minutes. Therefore, for each step t, after the model has generated prediction t+1, we feed it with this prediction in order to get the value at t+2, etc. Obviously the errors made at a previous step will propagate and the prediction will degrade, resulting in an increase in loss. In FIG6 we plot this loss evolution against the number of time-steps predicted. We find that the addition of temporal embeddings noticeably improves the quality of predictions up to a more distant horizon. While vanilla models perform similarly to or worse than the baseline after one hour of predictions, augmented models adopt a concave curve and deteriorate much more slowly. In particular, the addition of temporal embeddings to the Spatial model allows doubling the horizon during which it beats the baseline. As a second piece of evidence that temporal context is especially useful when predicting farther into the future, we input p observed values to the model to compute a starting hidden state h_p and then feed it only with its own predictions. Results of this experiment are presented in Figure 7 for p = 16, i.e. we input values until 8AM. Figure 7a shows, for each time-step starting from 8AM, the difference between the RMSE of the baseline and that of three recurrent models, averaged over the test set. We observe that the vanilla Multivariate model performs significantly worse than the baseline as the day progresses, especially during peak hours. On the other hand, temporal models tend to converge to the average model. Indeed, when predicting long-term sequences, the historical mean is the best estimator in the least-squares sense. Therefore, spatial and temporal context allow the Day & Time Spatial GRU to predict as well as the baseline with very partial information. Besides, as seen in FIG6, it is even better for around one hour after the last observed value was inputted. In Figure 7b, the new protocol is applied to the same disrupted sample as in FIG5, and in this particular case the baseline is not a good estimator. On the opposite, contextual models are able to detect from the first 4 hours of information that the traffic is disrupted and that they should diverge from the baseline.
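Schematically, this feed-back-own-predictions protocol looks as follows; `model.step` is an assumed one-step interface returning a prediction and the new hidden state:

```python
def multi_step_forecast(model, context, x_observed, horizon):
    """Warm up on observed values (e.g. until 8AM), then roll the
    model forward on its own predictions."""
    h, x = None, None
    for x_t in x_observed:
        x, h = model.step(x_t, context, h)
    predictions = [x]
    for _ in range(horizon - 1):
        x, h = model.step(x, context, h)   # feed back the prediction
        predictions.append(x)
    return predictions
```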
Even in this unusual, disrupted case, temporal context entails competitive long-term predictions. In this paper we presented a novel idea for time-series forecasting with contextual features. It consists in learning, within a recurrent neural network, the contextual information that strongly conditions the observed phenomenon. We applied this general idea to the concrete case of transportation log forecasting in the subway and observed a significant improvement of the prediction error when using spatial and temporal context. In particular, the proposed framework performs significantly better in one-step ahead prediction and remains competitive in long-term forecasting. From a very applied perspective, robust recurrent models could be used in the case of anomalies to accurately predict traffic recovery and help users adapt their behavior. Figure 7: Predictions for the test set are computed using only the first 16 values of each day, i.e. until 8AM, and we plot: (a) the average RMSE difference between the baseline and some proposed models for every time-step, where 0 corresponds to the baseline performance; and (b) the predicted logs for the same day and place as in FIG5.
In order to forecast multivariate stationary time-series, we learn embeddings containing contextual features within an RNN; we apply the framework to public transportation data
916
scitldr
The ability of an agent to discover its own learning objectives has long been considered a key ingredient for artificial general intelligence. Breakthroughs in autonomous decision making and reinforcement learning have primarily been in domains where the agent's goal is outlined and clear, such as playing a game to win, or driving safely. Several studies have demonstrated that learning extramural sub-tasks and auxiliary predictions can improve single human-specified task learning, transfer of learning, and the agent's learned representation of the world. In all these examples, the agent was instructed what to learn about. We investigate a framework for discovery: curating a large collection of predictions, which are used to construct the agent's representation of the world. Specifically, our system maintains a large collection of predictions, continually pruning and replacing predictions. We highlight the importance of considering stability rather than convergence for such a system, and develop an adaptive, regularized algorithm towards that aim. We provide several experiments in computational micro-worlds demonstrating that this simple approach can be effective for discovering useful predictions autonomously. The idea that an agent's knowledge might be represented as predictions has a long history in machine learning. The first references to such a predictive approach can be found in the work of BID4, BID2 and BID6, who hypothesized that agents would construct their understanding of the world from interaction, rather than human engineering. These ideas inspired work on predictive state representations (PSRs) as an approach to modeling dynamical systems. Simply put, a PSR can predict all possible interactions between an agent and its environment by reweighting a minimal collection of core tests (sequences of actions and observations) and their predictions, as an alternative to keeping a finite history or learning the one-step latent dynamics of the world, as in a POMDP. Extensions to high-dimensional continuous tasks have demonstrated that the predictive approach to dynamical system modeling is competitive with state-of-the-art system identification methods BID9. One important limitation of the PSR formalism is that the agent's internal representation of state must be composed exclusively of predictions. Recently, BID23 introduced a formalism for specifying and learning large collections of predictions using value functions from reinforcement learning. These General Value Functions (GVFs) can represent a wide array of multi-step state-contingent predictions BID19, and the predictions made by a collection of GVFs can be used to construct the agent's state representation (BID29). State representations constructed from predictions have been shown to be useful for reward maximization tasks BID21 and for transfer learning. One of the great innovations of GVFs is that we can clearly separate the desire to learn and make use of predictions in various ways from the construction of the agent's internal state representation. For example, the UNREAL learning system BID10 learns many auxiliary tasks (formalized as GVFs) while using an actor-critic algorithm to maximize a single external reward signal, the score in an Atari game. The auxiliary GVFs and the primary task learner share the same deep convolutional network structure. Learning the auxiliary tasks results in a better state representation than simply learning the main task alone.
GVFs allow an agent to make use of both the increased representational power of predictive representations and the flexibility of state-of-the-art deep learning systems. In all the works described above, the GVFs were manually specified by the designer; an autonomous agent, however, must discover these GVFs. Most work on discovery has been on the related topics of temporal difference (TD) networks BID15 and options BID11 BID17 BID16 BID28 BID16 BID1. Discovery for options is more related than for TD networks, because, similarly to a GVF, an option specifies small sub-tasks within an environment. Option discovery, however, has been largely directed towards providing temporally abstract actions for solving the larger task, rather than providing a predictive representation. For example, BID1 formulated a gradient descent update on option parameters (policies and termination functions) using policy gradient objectives. The difficulty in extending such gradient-based approaches is in specifying a suitable objective for prediction accuracy, which is difficult to measure online. We take inspiration from ideas from representation search methods developed for neural networks to tackle the daunting challenge of GVF discovery for predictive representations. Our approach is inspired by algorithms that search the topology space of neural networks. One of the first such approaches, cascade correlation network learning, typifies this approach BID7. The idea is to continually propose new hidden units over time, incrementally growing the network to multiple levels of abstraction. To avoid the computation and memory required to propose units whose activation is de-correlated with the network activations, BID24 empirically demonstrated that simply generating large numbers of hidden units outperformed equally sized fixed networks in online supervised learning problems. Related approaches demonstrated that massive random representations can be highly effective (BID0; BID8). This randomized feature search can be improved with the addition of periodic pruning, particularly in an incremental learning setting. In this paper, we demonstrate such a curation framework for GVF discovery, with simple algorithms to propose and prune GVFs. To parallel representation search, we need both a basic functional form (a GVF primitive) for each unit in the network and an update to adjust the weights on these units. We propose a simple set of GVF primitives from which to randomly generate candidate GVFs. We develop a regularized updating algorithm to facilitate pruning less useful GVFs, with a stepsize adaptation approach that maintains stability in the representation. We demonstrate both the ability of the regularizer to prune less useful GVFs, and the corresponding predictive features, as well as the utility of the GVF primitives as predictive features in several partially observable domains. Our approach provides a first investigation into a framework for curation of GVFs for predictive representations, with the aim of facilitating further development. The setting considered in this paper is an agent learning to predict long-term outcomes in a partially observable environment. The dynamics are driven by an underlying Markov decision process (MDP), with potentially (uncountably) infinite state space S and action space A, and transitions given by the density P: S × A × S → [0, ∞). The agent does not see the underlying states, but rather only observes an observation vector o_t ∈ O ⊂ R^m for the corresponding state s_t ∈ S.
The agent follows a fixed behaviour policy, μ: S × A → [0, ∞), taking actions according to μ(s_t, ·). Though we write this policy as a function of state, the behaviour policy is restricted to being a function of the input observations, which are themselves a function of state, and whatever agent state the agent constructs. The agent aims to build up a predictive representation, p_t = f(o_t, p_{t−1}) ∈ R^n for some function f, as a part of its agent state, to overcome the partial observability. GVF networks provide such a predictive representation, and are a generalization of PSRs and TD networks. A General Value Function (GVF) consists of a target policy π: S × A → [0, ∞), a discount function γ: S × A × S → [0, 1] and a cumulant r: S × A × S → R. The value function v_π: S → R is defined as the expected return G_t:

v_π(s) = E[G_t | S_t = s, actions taken according to π],  with  G_t = R_{t+1} + γ_{t+1} G_{t+1},

where R_{t+1} = r(S_t, A_t, S_{t+1}) and γ_{t+1} = γ(S_t, A_t, S_{t+1}). A GVF prediction, for example, could provide the probability of hitting a wall if the agent goes forward, by selecting a policy that persistently takes the forward action, a cumulant that equals 1 when a wall is hit and 0 otherwise, and a discount of 1 until termination. We direct the reader to prior work detailing the specification and expressiveness of GVFs BID23; we also provide further examples of GVFs throughout this paper.

Figure 1: The inputs, observations o_{t+1} and GVF predictions p_t from the last time step, pass through a nonlinear expansion, such as a fixed neural network or tile coding, producing the feature vector x_{t+1}. The feature vector is weighted linearly to produce the next set of predictions p_{t+1}. This approach decouples the specification of the representation for the learner, which consists of both observations and predictive features, from the updating algorithm. Such a framework could be modified to include a history of observations; for simplicity, here we only consider using predictions to overcome partial observability and do not maintain histories of observations.

A GVF network consists of a set of GVFs {(π_i, γ_i, r_i)}_{i=1}^n, where the prediction vector p_t ∈ R^n consists of the approximate value functions p_{t,i} ≈ v_{π_i}(s_t) and is computed as a function of the current observation and the predictions of the previous step, p_t = f(o_t, p_{t−1}). This network is depicted in Figure 1. GVF networks can encode both PSRs and TD networks, providing a general formalism for predictive representations; for space, we more thoroughly describe the relationships to these approaches in Appendix A. There has been some work towards the goal of learning f to provide the predictions p_t, in the original papers on TD networks and TD networks with options, and in a subsequent algorithm using recurrent gradient updates. Additionally, there has been quite a bit of work in off-policy learning with gradient temporal difference (GTD) algorithms BID22 BID13, which can be used to estimate the value function for a GVF policy π from experience generated by a different behaviour policy μ. We leverage these works to develop a new updating approach for GVF networks in the next section. When learning in a partially observable domain, the agent inherently faces a non-stationary problem. From the agent's perspective, a similar observation is observed, but the target may vary significantly, because hidden variables are influencing the outcome. For such settings, tracking has been shown to be critical for accuracy BID25, even when the underlying distribution is in fact stationary.
Tracking, i.e., continually updating the weights with recent experience, contrasts with the typical goal of convergence; much of the previous algorithm development, however, has been towards the aim of convergence. We propose treating the learning system as a dynamical system, where the weight update is based on stochastic updates known to suitably track the targets, and we consider the choice of stepsize as the control input used to maintain stability. Such updates have been previously considered under adaptive gain for least-mean squares (LMS) (BID3, Chapter 4), where weights are treated as state following a random drift. These approaches, however, are designed particularly for the LMS update and so do not extend to the off-policy temporal difference learning algorithms needed to learn GVFs. To generalize this idea to other incremental algorithms, we propose a more general criterion based on the magnitude of the update. Consider a generic update

w_{t+1} = w_t + (α/c_t) Δ_t,      (1)

where Δ_t ∈ R^d is the update for this step, for weights w_t ∈ R^d and constant stepsize α. Typically the update includes a normalization constant c_t, dependent on the norm of the features and the target. For example, for normalized LMS predicting target y_t from observation vector o_t, Δ_t = o_t(y_t − o_tᵀw_t) and c_t = ||o_t||₂² plus an estimate of the variance of the noise in the targets. Such normalization ensures the update appropriately reflects the descent direction, but is invariant to the scale of the features and targets. The weights w_t evolve as a function of the previous weights, with the stepsize α acting as a control input for how this system evolves. A criterion for α to maintain stability in the system is to keep the norm of the update small:

min_{α ≥ ε} E[ ||Δ_t(α)||₂² ],      (2)

for a small ε > 0 that provides a minimum stepsize. The update Δ_t(α) on this time step is dependent on the stepsize α, because that stepsize influences w_t and past updates. The expected value is over all possible update vectors Δ_t(α), for the given stepsize and assuming the system started in some w_0. If α is small enough to ensure updates are bounded, and the policy π and MDP satisfy the standard requirements for ergodicity, a stationary distribution exists, with Δ_t(α) not dependent on the initial w_0 and instead driven only by the underlying state dynamics and the target for the weights. In the next sections, we derive an algorithm to estimate α for this dynamical system, first for a general off-policy learning update and then with added regularization. We call this algorithm AdaGain: Adaptive Gain for Stability. We generically consider an update Δ_t in (1) that includes both TD and GTD. Before deriving the algorithm, we demonstrate concrete updates for the stepsize. For TD, the stepsize update is DISPLAYFORM0, where α_0 = 1.0, ψ_1 = 0, ᾱ is a meta-stepsize and β is a forgetting parameter used to forget old gradient information (e.g., β = 0.01). The operator (·)_ε thresholds any values below ε > 0 to ε (e.g., ε = 0.001), ensuring nonzero stepsizes. Another canonical algorithm for learning value functions is GTD(λ), with trace parameter λ: DISPLAYFORM1, with importance sampling correction ρ_t = π(s_t, a_t)/μ(s_t, a_t). For the auxiliary weights h_t, which estimate a part of the GTD objective, we use a small, fixed stepsize α_t^(h) = 0.01, previously found to be effective BID30. We consider the derivation more generally for such temporal difference methods, where both TD and GTD arise as special cases. Consider any update of the form DISPLAYFORM3, for vectors e_t, u_t not dependent on w_t. For GTD(λ), u_t = −γ_{t+1}(1 − λ_{t+1}) x_{t+1}.
We minimize using stochastic gradient descent, with the gradient for one sample of the norm of the update given by DISPLAYFORM4, where d_t := γ_{t+1} x_{t+1} − x_t, and we can recursively define ψ_t := ∂w_t/∂α as DISPLAYFORM5. We obtain a similar recursive relationship for ψ̃_t := ∂h_t/∂α, DISPLAYFORM6, where the last line follows from the fact that DISPLAYFORM7. This recursive form provides a mechanism to avoid storing all previous samples and eligibility traces, and still approximate the stochastic gradient update for the stepsize.

Though the above updates are exact, when implementing such a recursive form in practice we can only obtain an estimate of ψ_t if we want to avoid storing all past data. In particular, when using ψ_{t−1} computed on the last time step t − 1, this gradient estimate is in fact w.r.t. the previous stepsize α_{t−2}, rather than α_{t−1}. Because these stepsizes are slowly changing, this gradient still provides a reasonable estimate of the actual ψ_{t−1} for the current stepsize. However, for many steps into the past, these accumulated gradients in ψ_t and ψ̃_t are inaccurate. For example, even if the stepsize is nearing the optimal value, ψ_t will include larger gradients from the first steps, when the stepsizes were inaccurate. To forget these outdated gradients, we maintain an exponential moving average, which focuses the accumulation of gradients in ψ_t on a more recent window. The adjusted update with forgetting parameter 0 < β < 1 gives the recursive forms for ψ_{t+1} and ψ̃_{t+1} above.

A regularized GTD update for the weights can both reduce variance from noisy predictions and reduce weight on less useful features to facilitate pruning. To add regularization to GTD, for regularizer R(w) and regularization parameter η ≥ 0, we can use proximal updates BID14: DISPLAYFORM0, where prox_{αηR} is the proximal operator for the function αηR. The proximal operator acts like a projection, first updating the weights according to the GTD objective and then projecting the weights back to a solution that appropriately reflects the properties encoded by the regularizer. A proximal operator exists for our proposed regularizer, the clipped ℓ2 regularizer

R(w) = Σ_{i=1}^d min(w_i^2, τ),

where τ > 0 is the clipping threshold above which w_i has a fixed regularization. Though other regularizers are possible, we select this clipped ℓ2 regularizer for two reasons. The clipping ensures that high magnitude weights are not prevented from being learned, and reduces bias from shrinkage. Because the predictive representation requires accurate GVF predictions, we found the bias without clipping prevented learning. Additionally, we chose ℓ2 in the clipping, rather than ℓ1, because the clipping already facilitates pruning, and ℓ2 does not introduce additional non-differentiability. Below the clipping threshold, the regularization still prefers to reduce the magnitude of less useful features. For example, if two features are repeated, such a regularizer will prefer to have a higher magnitude weight on one feature, and zero weight on the other; no regularizer, or ℓ2 without clipping, will needlessly use both features. We provide the derivation, which closely parallels the one above, and the updates in Appendix B.
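Putting the recursion above together for the simplest case, the sketch below adapts the stepsize for TD(0) (no traces, no normalization, no regularization). The exact placement of the forgetting factor β is our assumption about the exponential-average variant just described.

```python
import numpy as np

def adagain_td0_step(w, psi, alpha, x, x_next, cumulant, gamma,
                     meta=0.01, beta=0.01, eps=1e-3):
    """TD(0) with AdaGain stepsize adaptation (a sketch).

    psi tracks d w / d alpha recursively, so no past data is stored."""
    delta = cumulant + gamma * (x_next @ w) - x @ w
    Delta = delta * x                       # TD(0) update direction
    d = gamma * x_next - x                  # d_t = gamma_{t+1} x_{t+1} - x_t
    grad = 2.0 * (Delta @ x) * (d @ psi)    # one-sample gradient of ||Delta||^2 wrt alpha
    alpha = max(eps, alpha - meta * grad)   # thresholded descent on the stepsize
    psi = (1.0 - beta) * (psi + alpha * x * (d @ psi)) + Delta  # forgetful recursion
    w = w + alpha * Delta
    return w, psi, alpha
```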
In this section we show that AdaGain with regularization (AdaGain-R) reduces the weights on less useful features. This investigation demonstrates the utility of the algorithm for pruning features, and so also for pruning proposed GVFs. We first test AdaGain-R on a robot platform, which is partially observable, with fixed image features. We provide additional experiments in two micro-worlds in Appendix C for pruning GVFs.

The first experiment is learning a GVF prediction on a robot, using a nonlinear expansion on input pixels. The robotic platform is a Kobuki rolling robot with an added ASUS XtionPRO RGB and Depth sensor. The agent receives a new image every 0.05 seconds, and 100 random pixels are sampled to construct the state. Each pixel's RGB values are tiled with 4 tiles and 4 tilings on each colour channel, resulting in 4800 bit values. A bias bit is also included, with a value of 1 on each time step. The fixed behaviour policy is to move forward until a wall is hit and then turn for a random amount of time. The goal is to learn the value function for a policy that always goes forward, with a cumulant of 1 when the agent bumps a wall and otherwise 0, and a discount of 0.97 everywhere except when the agent bumps the wall, resulting in a discount of 0 (termination). The GVF is learned off-policy, with GTD(λ) and with AdaGain-R using the GTD(λ) updates. Both GTD(λ) and AdaGain-R receive the same experience from the behaviour policy. Results are averaged over 7 runs. Our goal is to ascertain if AdaGain-R can learn in this environment, and if the regularization enables it to reduce magnitude on features without affecting performance. FIG1(a) depicts a sample trajectory of the predictions made by both algorithms, after about 40k learning steps; both are able to track the return accurately. This is further emphasized in FIG1(b), where averaged error decreases over time. Additionally, though they reach similar performance, AdaGain-R only has significant magnitude on about half of the features.

Our approach to generating a predictive representation is simple: we generate a large collection of GVFs and iteratively refine the representation by replacing the least used GVFs with new GVFs. In the previous section, we provided evidence that our AdaGain-R algorithm effectively prunes features; we now address the larger curation framework in this section. We provide a set of GVF primitives that enables candidates to be generated for the GVF network. We demonstrate the utility of this set, and that iteratively pruning and generating GVFs in our GVF network builds up an effective predictive representation.

To enable generation of GVFs for this discovery approach, we introduce GVF primitives. The goal is to provide modular components that can be combined to produce different structures. For example, within neural networks, it is common to modularly swap different activation functions, such as sigmoidal or tanh activations. For networks of GVFs, we similarly need these basic units to enable definition of the structure. We propose basic types for each component of the GVF: discount, cumulant and policy. For discounts, we consider myopic discounts (γ = 0), horizon discounts (γ ∈ (0, 1)) and termination discounts (the discount is set to 1 everywhere, except at an event, which consists of a transition (o, a, o′)). For cumulants, we consider stimuli cumulants (the cumulant is one of the observations, or inverted, where the cumulant is zero until an observation goes above a threshold) and compositional cumulants (the cumulant is the prediction of another GVF). We also investigate random cumulants (the cumulant is a random number generated from a zero-mean Gaussian with a random variance sampled from a uniform distribution); we do not expect these to be useful, but they provide a baseline.
For the policies, we propose random policies (an action is chosen at random) and persistent policies (the policy always follows one action). For example, a GVF could consist of a myopic discount, with a stimulus cumulant on observation bit one and a random policy. This would correspond to predicting the first component of the observation vector on the next step, assuming a random action is taken. As another example, a GVF could consist of a termination discount, an inverted stimulus cumulant for observation one and a persistent policy with action forward. If observations can only be '0' or '1', this GVF corresponds to predicting the probability of seeing observation one changing to '0' (inactive) from '1' (active), given the agent persistently drives forward.

We conduct experiments on our discovery approach for GVF networks in Compass World BID20, a partially observable grid-world where the agent can only see the colour immediately in front of it. There are four walls, with different colours; the agent observes this colour if it takes the action forward in front of the wall. Otherwise, the agent just sees white. There are five colours in total, with one wall having two colours and so being more difficult to predict. The observation vector is five-dimensional, consisting of an indicator bit for whether each colour is observed or not. We test the performance of the learned GVF network for answering five difficult GVF predictions about the environment, which cannot be learned using only the observations. Each difficult GVF prediction corresponds to a colour, with the goal to predict the probability of seeing that colour if the agent always goes forward. These GVFs are not used as part of the representation. A priori, it is not clear that the GVF primitives are sufficient to enable prediction in Compass World, particularly as using just the observations in this domain enables almost no learning of these five difficult GVFs.

The GVFs for the network are generated uniformly randomly from the set of GVF primitives, as sketched in the code below. Because the observations are all one bit (0 or 1), the stimuli cumulants are generated by selecting a bit index i (1 to 5) and then either setting the cumulant to that observation value, o_i, or to the inverse of that value, 1 − o_i. The events for termination are similarly randomly generated, with the event corresponding to a bit o_i flipping. The nonlinear transformation used for this GVF network is the hyperbolic tangent. Every two million steps, the bottom 10% of the current GVFs are pruned and replaced with newly generated GVFs. Results are averaged over 10 runs. FIG2 demonstrates that AdaGain-R with randomly generated GVF primitives learns a GVF network, and a corresponding predictive representation, that can accurately predict the five difficult GVFs. The results show that with as few as 100 GVFs in the network, accurate predictions can be learned, though increasing to 200 gives a noticeable improvement. The results also indicate that random cumulants were of no benefit, as expected, and that our system appropriately pruned those GVFs. Finally, compositional GVFs were particularly beneficial in later learning, suggesting that the system started to make better use of these compositions once the GVFs became more accurate.
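A sketch of how GVFs might be generated uniformly at random from these primitives follows; the dictionary encoding is hypothetical, chosen only to make the primitive types explicit.

```python
import random

ACTIONS = ["forward", "left", "right"]   # assumed action set
N_OBS = 5                                # Compass World observation bits

def random_gvf(existing):
    """Sample one GVF question from the primitives described above."""
    discount = random.choice([
        ("myopic", 0.0),
        ("horizon", random.uniform(0.0, 1.0)),
        ("termination", random.randrange(N_OBS)),   # event: this bit flips
    ])
    kind = random.choice(["stimulus", "compositional", "random"])
    if kind == "stimulus":
        cumulant = ("stimulus", random.randrange(N_OBS),
                    random.choice([False, True]))    # optionally inverted
    elif kind == "compositional" and existing:
        cumulant = ("compositional", random.randrange(len(existing)))
    else:
        cumulant = ("random", random.uniform(0.0, 1.0))   # noise variance (baseline)
    policy = random.choice([("random", None),
                            ("persistent", random.choice(ACTIONS))])
    return {"discount": discount, "cumulant": cumulant, "policy": policy}

network = []
for _ in range(100):
    network.append(random_gvf(network))
```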
In this paper, we proposed a discovery methodology for GVF networks, to enable the learning of predictive representations. The strategy involves iteratively generating and pruning GVF primitives for the GVF network, with a new algorithm called AdaGain to promote stability and facilitate pruning. The results demonstrate the utility of this curation strategy for discovering GVF networks. There are many aspects of our system that could have been designed differently, namely in terms of the learning algorithm, the generation approach and the pruning approach; here, our goal was to provide a first such demonstration, with the aim of facilitating further development. We discuss the two key aspects of our system below, and potential avenues to expand along these dimensions.

In the development of our learning strategy, we underline the importance of treating the predictive representation as a dynamical system. For a standard supervised learning setting, the representation is static: the network can be queried at any time with inputs. The predictive representations considered here cannot be turned off and on, because they progressively build up accurate predictions. This dynamic nature necessitates a different view of learning. We proposed a focus on stability of the predictive system, deriving an algorithm to learn a stepsize. The stepsize can be seen as a control input to stabilize the system, and was obtained with a relatively straightforward descent algorithm. More complex control inputs, however, could be considered. For example, a control function outputting a stepsize based on the current agent state could be much more reactive. Such an extension would then necessitate a more complex stability analysis from control theory.

Our discovery experiments reflect a life-long learning setting, where the predictive representation is slowly built up over millions of steps. This was slower than strictly necessary, because we wanted to enable convergence for each GVF network before culling. Further, the pruning strategy was simplistic, using a threshold of 10%; more compact GVF networks could likely be learned, and learned more quickly, with a more informed pruning approach. Nonetheless, even when designing learning with less conservative learning times, building such representations should be a long-term endeavour. A natural next step is to more explicitly explore scaffolding. For example, without compositional GVFs, myopic discounts were less frequently kept; this suggests initially preferring horizon and termination discounts, and increasing the preference for myopic discounts once compositional GVFs are added. Further, to keep the system as simple as possible, we did not treat compositional GVFs differently when pruning. For example, there is a sudden rise in prediction error at about 80 million steps in FIG2(b); this was likely caused by pruning a GVF whose prediction was the cumulant for a critical compositional GVF. Finally, we only considered a simple set of GVF primitives; though this simple set was quite effective, there is an opportunity to design other GVF primitives, and particularly those that might be amenable to composition.

There have been several approaches proposed to deal with partial observability. A common approach has been to use history: the most recent p observations o_{t−1}, ..., o_{t−p}. For example, a blind agent in the middle of an empty room can localize itself using a history of information. Once it reaches a wall, examining its history can determine how far away it is from the wall. This could clearly fail, however, if the history is too short. Predictive approaches, like PSRs and TD networks, have been shown to overcome this issue.
Further, PSRs have been shown to represent state more compactly than POMDPs BID12. A PSR is composed of a set of action-observation sequences and the corresponding probabilities of these observations occurring given the sequence of actions. The goal is to find a sufficient subset of such sequences (core tests) to determine the probability of all possible observations given any action sequence. PSRs have been extended with the use of options BID31, and with discovery of the core tests BID18. A PSR can be represented as a GVF network by using myopic discounts (γ = 0) and compositional predictions. For a test a_1 o_1, for example, to compute the probability of seeing o_1, the cumulant is 1 if o_1 is observed and 0 otherwise. To get a longer test, say a_0 o_0 a_1 o_1, a second GVF can be added which predicts the output of the first GVF (i.e., the probability of seeing o_1 given a_1 is taken), with fixed action a_0. This equivalence is only for computing probabilities of sequences of observations given sequences of actions. GVF networks specify the question, not the answer, and so GVF networks do not encompass the discovery methods or other nice mathematical properties of PSRs, such as can be obtained with linear PSRs.

A TD network is similarly composed of n predictions on each time step, and more heavily uses compositional questions to obtain complex predictions. Similarly to GVF networks, on each step, the predictions from the previous step and the current observations are used for this step. The targets for the nodes can be a function of the observation, and/or a function of another node (compositional). TD networks are restricted to asking questions about the outcomes from particular actions, rather than about outcomes from policies. TD networks with options BID21 BID20 were introduced to generalize to temporally extended actions. TD networks with options are almost equivalent to GVF networks, but have small differences due to generalizations of return specifications made in GVFs since then. For example, options have terminating conditions, which correspond to having a fixed discount during execution of the option and a termination discount of 0 at the end of the option. GVFs allow for more general discount functions. Additionally, TD networks, both with and without options, have a condition function. The generalization to policies, allowing action-values to be learned rather than just value functions, and the use of importance sampling corrections together encompass these functions. The key differences, then, between GVF networks and TD networks are in how the question networks are expressed and subsequently how they can be answered. GVF networks are less cumbersome to specify, because they use the language of GVFs. Further, once in this language, it is more straightforward to apply algorithms designed for learning GVFs. There are some algorithmic extensions to TD networks that are not encompassed by GVFs, such as TD networks with traces BID27.

A proximal operator for a function R with weighting αη is defined as

prox_{αηR}(w) = argmin_u (1/2) ‖u − w‖_2^2 + αηR(u).

Though proximal gradient algorithms are typically considered for convex regularizers, the proximal gradient update can be applied for our nonconvex regularizer because our proximal operator has a unique solution BID32. The proximal operator for the clipped ℓ2 regularizer is defined element-wise, for each entry in w: DISPLAYFORM1.
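Since the closed-form cases are not reproduced here, the sketch below computes the same element-wise proximal operator numerically, assuming the clipped regularizer R(w) = Σ_i min(w_i², τ): per coordinate, the minimiser is either the shrunk point of the quadratic branch, the untouched point of the clipped branch, or the boundary ±√τ, so evaluating the objective at those three candidates suffices.

```python
import numpy as np

def prox_clipped_l2(u, alpha_eta, tau):
    """Element-wise prox of R(w) = sum_i min(w_i^2, tau) (assumed form)."""
    def obj(v):
        return 0.5 * (v - u) ** 2 + alpha_eta * np.minimum(v ** 2, tau)
    cands = np.stack([
        u / (1.0 + 2.0 * alpha_eta),                  # quadratic-branch minimiser
        u,                                            # clipped-branch minimiser
        np.sign(u) * np.sqrt(tau) * np.ones_like(u),  # branch boundary
    ])
    best = np.argmin(np.stack([obj(c) for c in cands]), axis=0)
    return cands[best, np.arange(u.size)]

w = prox_clipped_l2(np.array([-2.0, -0.3, 0.05, 1.5]), alpha_eta=0.1, tau=0.25)
```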
The derivation of AdaGain with a proximal operator is similar to the derivation of AdaGain without regularization. The only difference is in the gradient of the weights, ψ_t = ∂w_t/∂α, with no change in the gradients of δ_t and of h_t (i.e., ψ̃_t). Because the proximal operator has non-differentiable points, we can only obtain a subderivative of the proximal operator w.r.t. the stepsize. For exact gradient descent, this subderivative would need to be used more carefully; in practice, however, using the subderivative directly within the stochastic gradient descent update seems to perform well, and so we simply do so. To derive the update, let w̄_t = w_t + α∆_t be the weights before applying the proximal operator. The subderivative of the proximal operator w.r.t. α, which we call dprox_{αηR}, is DISPLAYFORM2. The proximal operator uses w̄_{t,i} and ψ̄_{t,i} for the element-wise update. The resulting updates, including the exponential average with forgetting parameter β, are given in DISPLAYFORM3, with ψ_1 = ψ̄_1 = ψ̃_1 = 0.

The initial step size for GTD(λ) and AdaGain-R was chosen to be 0.1, with the step size of GTD(λ) normalized over the number of active features (the same as AdaGain-R's normalization factor). AdaGain-R has regularization parameters τ = 0.001 and η = 1 normalized over the number of active features, and a meta-stepsize ᾱ = 1.0.

When generating questions on the fly it is hard to know if certain questions will be learnable, or if their answers are harmful to future learning. To investigate the utility of our system for ignoring dysfunctional predictive features within the network, we conducted a small experiment in the six-state Cycle World domain BID26. Cycle World consists of six states through which the agent progresses deterministically. Each state has a single observation bit set to zero, except for a single state with an observation of one. We define seven GVFs for the GVF network to learn in Cycle World. Six of the GVFs correspond to the states of the cycle. The first of these GVFs has the observation bit as its cumulant, and a discount of zero. The goal for this GVF prediction is to predict the observation bit it expects to see on the next time step, which must be a '0' or a '1'. The second GVF predicts the prediction of this first GVF on the next step: its cumulant is the prediction from the first GVF on the next step. For example, imagine that on this step the first GVF accurately predicts a '1' will be observed on the next step. This means that on the next step, to be accurate, the first GVF should predict that a '0' will be observed. Since the second GVF gets this prediction for the next time step, its target will be a '0' for this time step, and it will be attempting to predict the observation bit in two time steps. Similarly, the cumulant for the third GVF is the second GVF's prediction on the next time step, and so it is aiming to predict the observation bit in three time steps.

Figure 4: The progression of improved stability, with the addition of components of our system. Even without a nonlinear transformation, AdaGain can maintain stability in the system (in (a)), though the meta-step size needs to be set more aggressively small. Interestingly, the addition of regularization significantly improved stability, and even convergence rate (in (b)). The seventh GVF is used more aggressively after Phase 1, once it becomes useful. The addition of a nonlinear transformation (in (c)) then finally lets the system react quickly once the seventh GVF becomes accurate. Again, without regularization, the magnitude of weights is more spread out, but otherwise the performance is almost exactly the same as (c). For both (b) and (c), the meta-parameters are η = 0.1, ε = 0.01, normalized with the number of features.
The fourth, fifth and sixth GVFs correspondingly aim to predict the observation bit in four, five and six time steps respectively. These six GVFs all have a discount of 0, since they are predicting immediate next observations. Finally, the seventh GVF reflects the likelihood of seeing a '1' within a short horizon, where the cumulant is the observation bit on the next step and the discount is 0.9 when the observation is '0' and 0.0 when the observation is '1'. With only the first six GVFs in the GVF network, the network cannot learn to make accurate predictions BID26. By adding the seventh GVF, the whole network can reach zero error; we add this GVF, therefore, to make this domain suitable to test the algorithms. Though not the intended purpose of this experiment, it is illuminating to see the immediate benefits of the expanded flexibility of the language of GVFs beyond TD networks.

In these experiments we want to measure our system's ability to stabilize learning in a situation where a GVF in the representation is dysfunctional. To simulate the circumstance of a harmful question, we replace the seventh GVF, the critical GVF for reducing error, with random noise sampled from a Gaussian of mean and variance 0.5. When this first phase is complete, after 50k time steps, we replace the noise with the unlearned critical GVF and measure the prediction error of the system. In Figure 4, we demonstrate that AdaGain enables stable learning under this perturbed learning setting. When the seventh GVF contained noise, AdaGain quickly dropped the step size for the other GVFs to the lower threshold. This also occurred when learning without the seventh GVF, and would prevent the instability seen in Cycle World in previous work BID26. After Phase 1, once the seventh GVF begins to learn, the step-sizes are increased. In this case, the addition of a regularizer also seems to improve stability. For this experiment, there is not as clear a separation between adapting the step-size for stability and using regularization to prune features; in fact, they both seem to play this role to some extent. The overall system, though, effectively handles this dysfunctional feature.

We demonstrate the ability of our system to put higher preference on more useful GVFs in the Compass World domain, and the effect of pruning less used GVFs. We construct a network with 45 expert GVFs defined in BID20, and 155 GVFs which produce noise sampled from a Gaussian with mean 0 and variance randomly selected from a uniform distribution. We prune 20 GVFs every two million steps based on the average magnitude of the feature weights; all other parameters are the same as for the experiments above. Because the handcrafted expert GVFs contain useful information, they should be used more by our system. Similarly to the experiments in Section 5.2, we use the five evaluation GVFs to measure the representation. As we can see in FIG5, AdaGain-R does mostly remove the dysfunctional GVFs first, and when the expert GVFs are pruned the representation isn't damaged until the penultimate prune. These results also show that pruning dysfunctional or unused GVFs from a representation is not harmful to the learning task.
The instability seen at the end of learning can be overcome by allowing the system to generate new GVFs to replace those that were pruned, and by pruning only a small number of GVFs relative to the size of the network used as the representation.
We investigate a framework for discovery: curating a large collection of predictions, which are used to construct the agent’s representation in partially observable domains.
917
scitldr
Estimating the location where an image was taken based solely on the contents of the image is a challenging task, even for humans, as properly labeling an image in such a fashion relies heavily on contextual information, and is not as simple as identifying a single object in the image. Thus any methods which attempt to do so must somehow account for these complexities, and no single model to date is completely capable of addressing all challenges. This work contributes to the state of research in image geolocation inferencing by introducing a novel global meshing strategy, outlining a variety of training procedures to overcome the considerable data limitations when training these models, and demonstrating how incorporating additional information can be used to improve the overall performance of a geolocation inference model. In this work, it is shown that Delaunay triangles are an effective type of mesh for geolocation in relatively low volume scenarios when compared to results from state-of-the-art models which use quad trees and an order of magnitude more training data. In addition, the time of posting, learned user albuming, and other meta data are easily incorporated to improve geolocation, by up to 11% for country-level (750 km) localities and up to 3% for city-level (25 km) localities.

Advancements in deep learning over the past several years have pushed the bounds of what is possible with machine learning beyond typical image classification or object localization. Whereas prior work in computer vision and machine learning has focused primarily on determining the contents of the image ("is there a dog in this photo?"), more recent methods and more complicated models have enabled deeper examinations of the contextual information behind an image. This deeper exploration allows a researcher to ask more challenging questions of its models. One such question is, "where in the world was this picture taken?" However, estimating the geographic origin of a ground-level image is a challenging task due to a number of different factors. Firstly, the volume of available images with geographic information associated with them is not evenly distributed across the globe. This uneven distribution increases the complexity of the model chosen and complicates the design of the model itself. Furthermore, there are additional challenges associated with geo-tagged image data, such as the potential for conflicting data (e.g., incorrect geolabels and replica landmarks, such as the St. Peter's Basilica in Nikko, Japan) and the ambiguity of geographic terms (e.g., imagery of similar and ambiguous geological features, such as beaches; screenshots of websites; or, generally, plates of food).

The work presented herein focuses on the task of content-based image geolocation: the process of identifying the geographic origin of an image taken at ground level. Given the significant growth in the production of data, and trends in social data to move away from pure text-based platforms to more images, videos, and mixed media, tackling the question of inferring the geographic context behind an image becomes a significant and relevant problem. Whereas social textual data may be enriched with geolocation information, images may or may not contain such information; EXIF data is often stripped from images or may be present but incomplete. Within the context of mixed-media data, using the geolocation information associated with a linked object may be unreliable or not even applicable.
For example, a user on Twitter may have geolocation information enabled for their account, but may post a link to an image taken from a location different from the one where they made their post.

A wide variety of approaches have been considered for geolocation from image content; please consult Brejcha & Čadík for a review. The approach taken in this paper builds on recent work in global geolocation from ground-based imagery BID19 for global-scale geoinferencing. BID19 utilize a multi-class approach with a one-hot encoding. A common approach, as well, is to perform instance-level scene retrieval in order to perform geolocation of imagery BID6 BID18. These works query on previously geotagged imagery and assign a geolabel based on the similarity of the image query to the database. Further, BID18 builds on work by BID6 and BID19 by utilizing the feature maps of the mesh-based classifier as the features for their nearest-neighbors scene retrieval approach. Prior work exists on data sampling strategies for large-scale classification problems in social media applications, which considers weighted sampling of minority classes. In this work, the class selection is biased during training of the deep learning models. Biasing class selection with random noising (often called image regularization) is a well known way to allow a model to see more examples of rare classes, but there are also additional concerns related specifically to social media applications. For example, researchers in prior work consider the case of sampling such that an individual user is only seen once per training epoch. In this work, sampling is performed without respect to user, so that images are selected in epochs completely at random, but the broader influence of latent variables other than user, and of communities, is of concern in social media geolocation.

The first type of model considered in this work is purely geolocation from image content (M1). The use of time information forms a second set of models within this work (M2). User-album inputs form a third set of models within this work (M3). Our work contributes meaningful consideration of the use of an alternative mesh for geolocation in M1 and demonstrates the meaningful use of time and user information to improve geolocation (M2 and M3).

Collected data holdings are derived from YFCC100M BID17, where training and validation were performed on a randomly selected 14.9M (12.2M training/2.7M validation) of the 48.4M geolabeled images. PlaNet, in comparison, used 125M (91M training/34M validation) images BID19 for model development. In using the data, it is assumed that the ground truth GPS location is exactly accurate, which is a reasonable approximation based on the results of BID5. Every YFCC100M image has associated meta data. Importantly, these data contain user-id and posted-time. User-id is a unique token that groups images by account. Posted time is the time that the user uploaded data to the website that ultimately created the combined storage of all images. The most likely true GPS location for an image varies by the time the image is uploaded to a website, as shown in Figure 1.

The approach described in this work for the global-scale geolocation model is similar to the classification approach taken by PlaNet BID19. Spatially, the globe is subdivided into a grid, forming a mesh, described in Section 3.1.1, of classification regions. An image classifier is then trained to recognize the imagery whose ground truth GPS is contained in a cell and hence, during inference, the mesh ascribes a probability distribution of geo-labels for the input imagery.

Figure 1: True Longitude versus Time of Posting. In these boxplots, we limit users to one image selected at random to decouple any user effects from time of day. What this figure implies is that the prior distribution for image longitude differs by the time of day that an image is posted. As an example, consider longitudes near 0. It would take less evidence to predict a longitude close to zero for an image posted around 01:00 UTC than to make a similar prediction at 21:00 UTC, because of the observed priors. At 21:00 UTC, the boxplot indicates images are typically as far from zero longitude as will be seen in the data.
The classification structure is generated using a Delaunay triangle-based meshing architecture. This differs from the PlaNet approach, which utilizes a quad-tree mesh. Similarly to PlaNet, the mesh cells are generated such that they conserve surface area, but the approach is not as simple as with a quad-tree. The triangular mesh was deployed under the hypothesis that the Delaunay triangles would be more adaptive to the geometric features of the Earth. Since triangular meshes are unstructured, they can more easily capture water/land interfaces without the additional refinement needed by quad-tree meshes. However, the triangular mesh loses the refinement-level information which comes with a structured quad-tree approach, which would allow granularity to be controlled more simply, based on which cells contain other cells. In order to control mesh refinement, cells are adaptively refined (divided) when a cell contains more than some number of examples (refinement limit), or the mesh cell is dropped from being classified by the model (and the contained imagery is also dropped from training) if the cell contains fewer than a minimum number of samples (minimum examples); a sketch of this rule is given below. The options used for generating the three meshes in this paper are shown in Table 1. Parameters for the initialization of the mesh were not explored; each mesh was initialized with a 31 x 31 structured grid with equal surface area in each triangle. The (a) coarse mesh and (b) fine mesh are shown in FIG2.1.1.

Table 1: The geolocation classification mesh can be tuned by modifying the refinement parameters to adjust the maximum and minimum number of examples in each mesh cell. This table shows the meshing parameters used for the three meshes studied in this work. These meshes were selected to cover a reasonable range of mesh structures, with fine P meant to replicate the mesh parameters utilized by PlaNet. However, it should be noted that this isn't directly comparable, since the fine P mesh is generated with both a different methodology and a different dataset.

Mesh   | Refinement limit | Minimum examples | Cells
Coarse | 8000             | 1000             | 538
Fine   | 5000             | 500              | 6565
Fine P | 10000            | 50               | 4771

Figure 2: Coarse mesh and fine mesh classification structure for the classification models developed in this paper. The red triangles indicate regions where the number of images per cell meets the criteria for training.

The coarse mesh was generated with an early dataset of 2M YFCC100M images, but final training was performed by populating those cells with all training data without additional refinement. The fine and fine P meshes were generated with all available imagery (14M). The fine mesh does a better job at representing geographic regions; for example, the water/land interface of Spain and Portugal becomes evident. However, this is masked with the coarser mesh, which lacks fidelity to natural geographic boundaries.
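The adaptive refinement rule can be written compactly; split_cell below is a hypothetical routine that subdivides a Delaunay triangle and reassigns its examples, since the subdivision itself is not spelled out here.

```python
def refine_mesh(cells, counts, refinement_limit, min_examples, split_cell):
    """One pass of the refinement rule: split over-full cells, keep
    adequately populated ones, drop under-populated ones (their
    imagery is dropped from training as well)."""
    kept = []
    for cell, n in zip(cells, counts):
        if n > refinement_limit:
            kept.extend(split_cell(cell))   # subdivide; re-check on the next pass
        elif n >= min_examples:
            kept.append(cell)               # becomes a classification cell
        # cells with fewer than min_examples are dropped entirely
    return kept
```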
The Inception v4 convolutional neural network architecture proposed in BID16 is deployed to develop the mesh-based classification geolocation model (Model M1) presented in this work, which differs from the Inception v3 architecture used in PlaNet BID19. A softmax classification is utilized, similar to the PlaNet approach. One further difference between the work presented here and that of PlaNet is that, in PlaNet, cells are labeled based on the cell centroid, in latitude-longitude. Here an alternate approach is used where the "center-of-mass" of the training data in a containing cell is computed, whereby the geolocation for each cell is specified as the lat/lon centroid of the image population. Significant improvements are expected for coarse meshes, as it can be noticed in FIG2.1.1, for example, that the cell centroid on the coast of Spain is out in the ocean. Therefore, any beach image, as an example, will have an intrinsically higher error than would otherwise be captured by a finer mesh. This is especially true for high density population regions. Models are evaluated by calculating the distance between the predicted cell and the ground-truth GPS coordinate using a great-circle distance:

d = R · arccos( sin(x_1) sin(x_2) + cos(x_1) cos(x_2) cos(y_1 − y_2) ),

where x_i and y_i are the latitude and longitude, respectively, in radians, and R is the radius of the Earth (assumed to be 6372.795 km). An error threshold here is defined as the percentage of images which are classified to within a distance d. Error thresholds of 1 km, 25 km, 200 km, 750 km, and 2500 km are utilized to represent street, city, region, country, and continent localities, to remain consistent with previous work BID6 BID19 BID18. As an example, if 1 out of 10 images are geolabeled with a distance of less than 200 km, then the 200 km threshold would be 10%.

Every YFCC100M image has associated meta data; importantly, these data consist of user id and posted time. Posting time is utilized in model M2, FIG2. Let Z_ik be the one-hot encoding of the i-th image for the k-th class, so that it takes on value 1 if image i belongs to the k-th geolocation class. Figure 1 is only strong evidence that P(Z_ik = 1 | t_i) ≠ P(Z_ik = 1), and it could still be the case that

P(Z_ik = 1 | x_i, t_i) = P(Z_ik = 1 | x_i).

Which is to say, conditioned on the content of an image there could be no dependence on time, but it seems prudent with the evidence in this figure to proceed under the assumption that there is time dependence. The operational research hypothesis for this model (M2) is that there remains a time-dependence after conditioning on image content. To incorporate time, related variables are appended to the output of the geolocation model (M1) to form a new input for M2. Every image has a vector of fit probabilities p_i from the softmax layer of M1. p_i is filtered so that only the top 10 maximum entries remain, and the other entries are set to 0 (denoted p̃_i); a sketch of this construction, together with the great-circle distance above, is given below.

Model M3, FIG2, simultaneously geolocates many images from a single user with an LSTM model. The idea here is to borrow information from other images a user has posted to help aid geolocation. The bidirectional LSTM capitalizes on correlations in a single user's images. LSTMs were also considered by BID19, but in PlaNet the albums were created and organized by humans. When a human organizes images they may do so by topic or location. In M3, all images by a single user are organized sequentially in time with no particular further organization.
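A sketch of the evaluation distance and the M2 input construction follows. The haversine form shown is one standard way to compute the great-circle distance, and the cyclic hour encoding is an assumption about how the "time related variables" might be represented.

```python
import numpy as np

R_EARTH_KM = 6372.795  # Earth radius assumed in the paper

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle distance (haversine form); inputs in radians."""
    a = (np.sin((lat2 - lat1) / 2.0) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2.0) ** 2)
    return 2.0 * R_EARTH_KM * np.arcsin(np.sqrt(a))

def m2_input(p, posted_hour, k=10):
    """Top-k filtered, renormalised M1 probabilities plus time features."""
    p_filtered = np.zeros_like(p)
    top = np.argsort(p)[-k:]
    p_filtered[top] = p[top]          # keep only the 10 largest entries
    p_filtered /= p_filtered.sum()    # renormalise to sum to 1
    angle = 2.0 * np.pi * posted_hour / 24.0   # hypothetical cyclic encoding
    return np.concatenate([p_filtered, [np.sin(angle), np.cos(angle)]])
```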
The related research question is: does the success observed by BID19 extend to this less informative organization of images? All images from a user are grouped into albums of size 24 (see the sketch below). If there are not enough images to fill an album, then albums are 0-padded and masking is utilized. During training, a user was limited to a single random album per epoch. Album averaging was considered by PlaNet: in album averaging, the images in an album are assigned a mesh cell based on the highest average probability across all images. This method increases accuracy by borrowing information across related images. As a control, a similar idea is applied to user images, in which the location of an image is determined as the maximum average probability across all images associated with the posting user. This assumes that all images from a user are from the same mesh grid cell. In addition, with user-averaging there is no optimization that controls class frequencies to be unbiased. Finally, an LSTM on a time-ordered sequence of images was considered (without respect to user). However, we were unable to improve performance significantly past that gained by just adding time to the model, so albums without user are not further considered in this paper.

The output of M1 is filtered to output only the top 10 mesh cell probabilities (making it mostly sparse), re-normalized to sum to 1. Training of M2 and M3 was done only on the validation data of M1 (using a new random split of test-validation). Time inputs are concatenated to the filtered and normalized outputs, and a new training step is implied. A small abuse of notation is present: M3 (Time-Albums) is properly described as M2 concatenated with M3 layers.

Meshing parameters are investigated to understand the sensitivity to mesh adaptation. The results for each mesh are shown in TAB1. There is an apparent trade-off between fine-grain and coarse-grain geolocation. The coarse mesh demonstrates improved large-granularity geolocation and the fine mesh performs better at finer granularities, as would be expected. This observation was also noticed in BID18. In addition, the impact of classifying on the centroid of the training data is compared to utilizing the cell centroid for class labels. A dramatic improvement is noticed for the coarse mesh, with only a modest improvement for the fine mesh. The BID20 model was used in conjunction with indoor-outdoor label delineations to filter geolocation inference to only outdoor imagery. Note that the geolocation model was not re-trained on the "outdoor" imagery; this is only conducted as a filtering operation during inference. Results are shown in TAB2. In general, the improvement is quite good: about a 4-8% improvement in accuracy for region/country localities, with a more modest boost in smaller-scale regions. The Im2GPS testing data is utilized to test the model on the 237 images provided by the work of BID6. Results are tabulated in TAB3 for all of the meshes. The imagery centroid classification labels are generated with the YFCC100M training data, yet the performance is still greatly improved when applied to the Im2GPS testing set, demonstrating the generality of the approach. The performance of the M1 classification model is comparable to the performance of BID19, with a factor of ten less training data and far fewer classification regions (6k compared to 21k); the coarse mesh M1 model exceeds the performance of PlaNet for large regions (750 and 2500 km). Use of time improves geolocation in two ways.
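A sketch of the album construction for M3 is below. Whether the random album is a contiguous time-window of the user's images is our assumption; the text specifies only sequential ordering, size 24, zero-padding and masking.

```python
import numpy as np
from collections import defaultdict

def sample_albums(image_ids, user_ids, album_size=24, seed=0):
    """One album per user per epoch: time-ordered, zero-padded, masked."""
    rng = np.random.default_rng(seed)
    by_user = defaultdict(list)
    for img, user in zip(image_ids, user_ids):   # image_ids assumed time-ordered
        by_user[user].append(img)
    albums, masks = [], []
    for imgs in by_user.values():
        start = int(rng.integers(0, max(1, len(imgs) - album_size + 1)))
        chosen = imgs[start:start + album_size]
        pad = album_size - len(chosen)
        albums.append(chosen + [0] * pad)            # 0-padded album
        masks.append([1] * len(chosen) + [0] * pad)  # 1 = real image
    return np.array(albums), np.array(masks)
```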
There is a slight gain of accuracy from using time, as indicated in TAB2: 24.20% of images are geolocated to within 200 km, as opposed to 23.99% without using time, with the coarse mesh, and 12.28% versus 10.49% are within 25 km using the fine mesh. This small persistent advantage can be seen across all error calculations and is statistically significant. There is a measurable difference between the error of the coarse mesh using time (M2) and not using time (M1). There exists a matched pair of errors for each image: e¹_i and e²_i, where i is a validation image index, the first superscript denoting the M1 error and the second the M2 error; let µ¹ denote the mean of e¹_i, and likewise µ² for e²_i. This hypothesis is tested with a Wilcoxon signed-rank test for paired observations, which makes a minimal number of distributional assumptions. Specifically, normality of the errors is not assumed. The difference in means, 381 km in favor of using time inputs, is highly significant (p-value < 10⁻¹⁶), so even though the effect of M2 is small, it is not explained by chance alone. It is the case that the distribution of errors is mean shifted, but it is not uniformly shifted to lower error, nor is it the case that images are universally predicted closer to the truth. The median of the coarse mesh e¹ is 1627 km, while the median of e² is 1262 km (TAB5).

Time-input models appear to have lower-bias class probabilities. Cross-entropy was optimized in training for both the classification model (M1) and the time-inputs models (M2). In each training method the goal is to minimize these class biases. KL-Divergence(p, q) is calculated for each model, where p is the observed class proportions for the true labels in validation, and q is the observed class proportions for the model-predicted labels in validation (in both cases, 1 is added to the counts prior to calculating proportions, as an adjustment for 0-count issues); a sketch of this computation follows below. The KL-divergence of the model output class frequencies compared to the true class frequencies in validation is in TAB5.

"User-Averaging" is incorporated into the results because it is a simple method that appears to be more accurate than predicting individual images with M1 or M2; however, it biases cell count frequency (TAB5). In general, when using the average probability vector to predict a user's image, there is no guarantee that the class frequencies are distributed similarly to the truth; thus, improved accuracy can come with higher bias, which is what is observed. Albums are a much better approach to borrow information across a user's images, because built into the training method is a bias-reducing cross-entropy optimization, and indeed LSTMs on user albums had the lowest class bias of any model considered.

Table 6: Percentage accuracy to specific resolution: researched models compared at various spatial resolutions. Coarse and fine mesh have 538 and 6565 triangles across the globe (cells), respectively. "Time inputs" indicates that time meta information has been concatenated with M1 output. Albums are created using user-id and contain 24 images. Bold is the best observed accuracy in each column. Coarse Mesh and Fine Mesh Best Possible are not actual models, but rather the best possible accuracy if every image were given exactly the right class. In the fine mesh the best possible accuracy is incidental, but in the coarse mesh it is a severe limitation for street-level accuracy.

Conditioning on latent variables can only improve geolocation models. Universally, using time of day in the models was observed to increase accuracy and lower bias.
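The bias measure just described can be computed directly; a minimal sketch:

```python
import numpy as np

def class_kl(true_labels, pred_labels, n_classes):
    """KL(p || q) between true and predicted class frequencies,
    with 1 added to every count before normalising."""
    p = np.bincount(true_labels, minlength=n_classes) + 1.0
    q = np.bincount(pred_labels, minlength=n_classes) + 1.0
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))
```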
Time of day is a weak addition to Inception-like models, but it is useful to be as accurate as possible, and it makes statistically significant improvements to geolocation. Both meshes were improved by using time information (M2). This is a result that is not surprising, and, as a research approach, it can be applied to any number of meta data variables that might accompany images. Accounting for indoor/outdoor scenes in images explained variation in validation accuracy. Outdoor-only results are better than results for all images. We suggest as future work that the probability that an image is outdoors could be concatenated to the input of M2. The accompanying research hypothesis is that, in the case that an image is indoors, perhaps the model will learn to weight time or other meta data more heavily, or otherwise be able to use that information optimally.

Increasing the granularity of a grid reduces accuracy at the country and regional level, while improving accuracy at the street and city level. To be clear though, street-level geoinferencing is not practical with a coarse mesh. This is shown by the best possible accuracy in Table 6, and so it is expected that a fine mesh would do better. On the other hand, there is no reason to assume that a fine mesh has to be better for geolocation at resolutions larger than 25 km, nor is there any explicit way to prove a fine mesh should do no worse than a coarse mesh. What we observe is that a coarse mesh is a superior grid for 200 km resolutions. Furthermore, we show that for both the coarse mesh and the fine mesh, using a Delaunay triangle-based mesh provides the ability to train accurate models with far fewer training examples than what was previously published.

Images were divided at random into training and validation sets of 12.2M and 2.7M images and associated metadata, respectively. Validation data used for M1 was further sub-divided at random into training and validation sets for training the time-based models (M2 and M3), so that no data used to train M1 was also used to train M2 and M3.

Historically, softmax classification is shown to perform quite poorly when the number of output classes is large BID0 BID12 BID15. During initial experiments with large meshes (> 4,000 mesh regions), a training procedure was developed to circumvent these challenges. Empirically, this procedure worked for training the large models presented in this paper; however, it is not demonstrated to be ideal, nor that all steps are necessary for model training. This approach started by training the model, pre-trained on ImageNet, with Adagrad BID3. Second, the number of training examples was increased each epoch by 6%, with the initial number of examples equal to the number of cells times 200. Third, the classes were biased by oversampling minority classes, such that all classes were initially equally represented. A consequence of this approach, however, is that the minority classes are seen repeatedly, and therefore the majority classes have significantly more learned diversity. Fourth, the class bias was reduced after each model had completed a training cycle: previous weights were loaded and the model re-trained with a reduced bias. The final model was trained with SGD, using a decreasing learning rate reduced by 4% with each epoch, without class biasing and with the full dataset per epoch. The initial value of the learning rate varied for each model (between 0.004 and 0.02). The values of those hyperparameters were empirically determined; a sketch of the schedule is given below. The layers of M2 are described in TAB0.
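The staged schedule can be summarised as below; treating both the 6% growth and the 4% decay as multiplicative per-epoch factors is our reading of the description above.

```python
def epoch_schedule(epoch, n_cells, base_lr=0.02):
    """Per-epoch example count and learning rate under the described schedule."""
    n_examples = int(200 * n_cells * 1.06 ** epoch)   # grows 6% per epoch
    lr = base_lr * 0.96 ** epoch                      # shrinks 4% per epoch
    return n_examples, lr

n_examples, lr = epoch_schedule(epoch=10, n_cells=538)
```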
M2 is trained using He initializations BID9, with initial iterations of Adaboost BID4, followed by ADAM at learning rates of 0.005 and 0.001 BID10. Early stopping is used to detect a sustained decrease in validation accuracy BID2. The generality of the M1 classification model is demonstrated by performing a query-by-example on the 2K random Im2GPS dataset. An example of an image of the Church of the Savior on Spilled Blood is shown in FIG3. By manual inspection (querying on the bounding box by GPS location), this church was not present in the training data nor in the 2K random dataset BID6.

Each image is given a categorical indicator variable z_ik = 1 if the i-th image is in the k-th class, and 0 otherwise. There exists a latent class distribution p_k = P(Z_ik = 1), which is assumed constant between training, testing, and application. An estimate of this unknown distribution is

p̂_k = (1/N) Σ_{j∈Training} I(z_jk = 1).

The second-to-last layer in all trained networks is assumed to be a fit logit vector for the i-th image: L_i = (L_{1i}, ..., L_{Ki}), where K is the number of classes in the mesh grid. The last layer output from the networks is a softmax, p̂. When models are compared we prefer the most accurate, but may also tilt toward models that are unbiased in classification distribution. If training has been done well, it should be the case that the KL-divergence between q and p is low:

Σ_k p̂_k log( p̂_k / q_k ).

As a matter of completeness we also consider the entropy of p and q.
A global geolocation inference strategy with a novel meshing strategy, demonstrating that incorporating additional information can improve the overall performance of a geolocation inference model.
918
scitldr
Hierarchical Bayesian methods have the potential to unify many related tasks (e.g. k-shot classification, conditional, and unconditional generation) by framing each as inference within a single generative model. We show that existing approaches for learning such models can fail on expressive generative networks such as PixelCNNs, by describing the global distribution with little reliance on latent variables. To address this, we develop a modification of the Variational Autoencoder in which encoded observations are decoded to new elements from the same class; the result, which we call a Variational Homoencoder (VHE), may be understood as training a hierarchical latent variable model which better utilises latent variables in these cases. Using this framework enables us to train a hierarchical PixelCNN for the Omniglot dataset, outperforming all existing models on test set likelihood. With a single model we achieve both strong one-shot generation and near human-level classification, competitive with state-of-the-art discriminative classifiers. The VHE objective extends naturally to richer dataset structures such as factorial or hierarchical categories, as we illustrate by training models to separate character content from simple variations in drawing style, and to generalise the style of an alphabet to new characters.

Learning from few examples is possible only with strong inductive biases. In machine learning such biases can come from hand design, as in the parametrisation of a model, or can be the result of a meta-learning algorithm. Furthermore they may be task-specific, as in discriminative modelling, or may describe the world causally so as to be naturally reused across many tasks. Recent work has approached one- and few-shot learning from all of these perspectives. Siamese Networks BID15, Matching Networks, Prototypical Networks BID23 and MANNs BID22 are all models discriminatively trained for few-shot classification. Such models can achieve state-of-the-art performance at the task they were trained for, but provide no principled means for transferring knowledge to other tasks. Other work has developed conditional generative models, which take one or a few observations from a class as input, and return a distribution over new elements p(x|D). These models may be used as classifiers despite not being explicitly trained for this purpose, by comparing conditional likelihoods. They may also be used to generate full sets incrementally as p(X) = Π_i p(x_i | x_1, ..., x_{i−1}), as discussed in Generative Matching Networks BID0. However, such models are a more natural fit to sequences than sets, as they typically lack exchangeability, and furthermore they do not expose any latent representation of shared structure within a set. Finally, there are hierarchical approaches that model shared structure through latent variables, as

p(X) = ∫ p(c) Π_{x∈X} p(x|c) dc.

A VAE treats all datapoints as independent, so only a single random element need be encoded and decoded each step. A Neural Statistician instead feeds a full set of elements X through both encoder and decoder networks, in order to share a latent variable c. In a VHE, we bound the full likelihood p(X) using only random subsamples D and x for encoding/decoding. Optionally, p(x|c) may be defined through a local latent variable z.

In this work we propose the Variational Homoencoder (VHE), aiming to combine several advantages of the models described above:
1. Like conditional generative approaches, we train on a few-shot generation objective which matches how our model may be used at test time. However, by introducing an encoding cost, we simultaneously optimise a likelihood lower bound for a hierarchical generative model, in which structure shared across elements is made explicit by shared latent variables.

2. Previous work BID6 has learned hierarchical Bayesian models by applying Variational Autoencoders to sets, such as classes of images. However, their approach requires feeding a full set through the model per gradient step (FIG1), rendering it intractable to train on very large sets. In practice, they avoid computational limits by sampling smaller subsets as training data. In a VHE, we instead optimise a likelihood bound for the complete dataset, while constructing this bound by subsampling. This approach can not only improve generalisation, but also departs from previous work by extending to models with richer latent structure, for which the joint likelihood cannot be factorised.

3. As with a VAE, the VHE objective includes both an encoding cost and a reconstruction cost. However, by sharing latent variables across a large set of elements, the encoding cost per element is reduced significantly. This facilitates the use of powerful autoregressive decoders, which otherwise often suffer from ignoring latent variables BID3. We demonstrate the significance of this by applying a VHE to the Omniglot dataset. Using a PixelCNN decoder, our generative model is arguably the first with a general purpose architecture to both attain near human-level one-shot classification performance and produce high quality samples in one-shot generation.

When dealing with latent variable models of the form p(x) = ∫ p(z) p(x|z) dz, the integration is necessary for both learning and inference but is often intractable to compute in closed form. Variational Autoencoders provide a method for learning such models by utilising neural-network based approximate posterior inference. Specifically, a VAE comprises a generative network p(z)p(x|z) parametrised by θ, alongside a separate inference network q(z; x) parametrised by φ. These are trained jointly to maximise a single objective:

L_X = Σ_{x∈X} ( log p(x) − D_KL[ q(z; x) ‖ p(z|x) ] )      (1)
    = Σ_{x∈X} ( E_{q(z;x)}[ log p(x|z) ] − D_KL[ q(z; x) ‖ p(z) ] )      (2)

As can be seen from Equation 1, this objective L_X is a lower bound on the total log likelihood of the dataset Σ_{x∈X} log p(x), while q(z; x) is trained to approximate the true posterior p(z|x) as accurately as possible. If it could match this distribution exactly then the bound would be tight, so that the VAE objective equals the true log likelihood of the data. In practice, the resulting model is typically a compromise between two goals: pulling p towards a distribution that assigns high likelihood to the data, but also towards one which allows accurate inference by q. Equation 2 provides a formulation for the same objective which can be optimised stochastically, using Monte-Carlo integration to approximate the expectation.

The Neural Statistician BID6 is a Variational Autoencoder in which each item to be encoded is itself a set, such as the set X^(i) of all images with a particular class label i:

X^(i) = { x : x has class label i }.

The generative model for sets, p(X), is described by introduction of a corresponding latent variable c. Given c, individual x ∈ X are conditionally independent:

p(X) = ∫ p(c) Π_{x∈X} p(x|c) dc.      (4)

This functional form is justified by de Finetti's theorem under the assumption that elements within each set X are exchangeable.
The likelihood is again intractable to compute, but it can be bounded below via:

log p(X) ≥ E_{q(c;X)} [ Σ_{x∈X} log p(x|c) ] − D_KL[q(c; X) ‖ p(c)]    (5)

Unfortunately, calculating the variational lower bound for each set X requires evaluating both q(c; X) and p(X|c), meaning that the entire set must be passed through both networks for each gradient update. This can easily become intractable for classes with hundreds of examples. Indeed, previous work BID6 ensures that sets used for training are always of small size by instead maximising a likelihood lower-bound for randomly sampled subsets. In this work we instead replace the variational lower-bound in Equation 5 with a new objective, itself constructed via sub-sampled datasets of reduced size. We use a constrained variational distribution q(c; D), D ⊆ X for posterior inference and an unbiased stochastic approximation log p(x|c), x ∈ X for the likelihood. In the following section we show that the resulting objective can be interpreted as a lower-bound on the log-likelihood of the data. This bound will typically be loose due to stochasticity in sampling D, and we view this as a regularisation strategy: we aim to learn latent representations that are quickly inferable from a small number of instances, and the VHE objective is tailored for this purpose. We would like to learn a generative model for sets X of the form

p(X) = ∫ p(c) ∏_{x∈X} p(x|c) dc

We will refer to our full dataset as a union of disjoint sets X = X_1 ∪ X_2 ∪ ... ∪ X_n, and use X(x) to refer to the set X_i containing x. Using the standard consequence of Jensen's inequality, we can lower bound the log-likelihood of each set X using an arbitrary distribution q. In particular, we give q as a fixed function of arbitrary data, so that for any D ⊆ X:

log p(X) ≥ E_{q(c;D)} [ log p(X|c) ] − D_KL[q(c; D) ‖ p(c)]

(Algorithm 1, excerpted: sample x and D ⊂ X(x); compute a reparametrisation gradient estimate g using c, z; take a gradient step (θ, φ) ← (θ, φ) + λg, e.g. SGD; repeat until convergence of (θ, φ).) Splitting up individual likelihoods, we may rewrite

log p(X) ≥ Σ_{x∈X} ( E_{q(c;D)} [ log p(x|c) ] − (1/|X|) · D_KL[q(c; D) ‖ p(c)] )    (9)

Finally, we can replace the universal quantification with an expectation under any distribution of D (e.g. uniform sampling from X without replacement):

log p(X) ≥ E_{D⊂X} Σ_{x∈X} ( E_{q(c;D)} [ log p(x|c) ] − (1/|X|) · D_KL[q(c; D) ‖ p(c)] )

This formulation suggests a simple modification to the VAE training procedure, as shown in Algorithm 1. At each iteration we select an element x, use resampled elements D ⊂ X(x) to construct the approximate posterior q(c; D), and rescale the encoding cost appropriately. If the generative model p(x|c) also describes a separate latent variable z for each element, we may simply introduce a second inference network q(z; c, x) in order to replace the exact reconstruction error of Equation 9 by a conditional VAE bound:

log p(x|c) ≥ E_{q(z;c,x)} [ log p(x|z, c) ] − D_KL[q(z; c, x) ‖ p(z|c)]

The above derivation applies to a dataset partitioned into disjoint subsets X = X_1 ∪ X_2 ∪ ... ∪ X_n, each with a corresponding latent variable c_i. However, many datasets offer a richer organisational structure, such as the hierarchical grouping of characters into alphabets or the factorial categorisation of rendered faces by identity, pose and lighting BID16. Provided that such organisational structure is known in advance, we may generalise the training objective in FIG1 to include a separate latent variable c_i for each group X_i within the dataset, even when these groups overlap.
To do this we first rewrite this bound in its most general form, where c collects all latent variables: DISPLAYFORM0 As shown in Figure 2, a separate D_i ⊂ X_i may be subsampled for inference of each latent variable c_i. This leads to an analogous training objective (Equation 15), which may be applied to data with factorial or hierarchical category structure. (Figure 2 caption: Application of VHE framework to hierarchical (left) and factorial (right) models. Given an element x such that x ∈ X_1 and x ∈ X_2, an approximate posterior is constructed for the corresponding shared latent variables c_1, c_2 using subsampled sets D_1 ⊂ X_1 and D_2 ⊂ X_2.) For the hierarchical case, this objective may be further modified to infer layers sequentially, as in Supplementary Material 6.2.

3.3 DISCUSSION

As evident in Equation 9, the VHE objective provides a formal motivation for KL rescaling in the variational objective (a common technique to increase use of latent variables in VAEs) by sharing these variables across many elements. This is of particular importance when using autoregressive decoder models, for which a common failure mode is to learn a decoder p(x|z) with no dependence on the latent space, thus avoiding the encoding cost. In the context of VAEs, this particular issue has been discussed by BID3, who suggest crippling the decoder as a potential remedy. The same failure mode can occur when training a VAE for sets if the inference network q is not able to reduce its approximation error D_KL[q(c; D) ‖ p(c|D)] to below the total correlation of D, either because |D| is too small, or the inference network q is too weak. Variational Homoencoders suggest a potential remedy to this, encouraging use of the latent space by reusing the same latent variables across a large set X. This allows a VHE to learn useful representations even with |D| = 1, while at the same time utilising a powerful decoder model to achieve highly accurate density estimation. In a VAE, use of a recognition network encourages learning of generative models whose structure permits accurate amortised inference. In a VHE, this recognition network takes only a small subsample as input, which additionally encourages that the true posterior p(c|X) can be well approximated from only a few examples of X. For a subsample D ⊂ X, q(c; D) is implicitly trained to minimise the KL divergence from this posterior in expectation over possible sets X consistent with D. For a data distribution p_d we may equivalently describe the VHE objective (Equation 12) as DISPLAYFORM0. Note that the variational gap on the right side of this equation is itself bounded by: DISPLAYFORM1. The left inequality is tightest when p(c|X) matches p(c|D) well across all X consistent with D, and exact only when these are equal. We view this aspect of the VHE loss as a regulariser for constrained posterior approximation, encouraging models for which the posterior p(c|X) can be well determined by sampled subsets D ⊂ X. This reflects how we expect the model to be used at test time, and in practice we have found this 'loose' bound to perform well in our experiments. In principle, the bound may also be tightened by introducing an auxiliary inference network (see Supplementary Material 6.1), which we leave as a direction for future research. With a Neural Statistician model, under-utilisation of latent variables is expected to pose the greatest difficulty either when |D| is too small, or the inference network q is insufficiently expressive.
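Before turning to the experiments, here is a minimal sketch of one VHE gradient step (Algorithm 1 above) in the simplest setting: a Gaussian q(c; D) with a standard normal prior and no per-element latent z. The `encoder`/`decoder` interfaces, and all hyper-parameter values, are our own illustrative assumptions, not the paper's.

```python
import random
import torch

def vhe_step(encoder, decoder, dataset, class_of, opt, d_size=5):
    """One gradient step of the VHE objective (cf. Algorithm 1):
    pick an element x, subsample D from its class, infer q(c; D),
    reconstruct x, and rescale the KL (encoding) cost by 1/|X|."""
    x = random.choice(dataset)                      # target element x
    X = class_of(x)                                 # its full class X(x)
    D = random.sample(X, min(d_size, len(X)))       # subsample D ⊂ X(x)

    mu, logvar = encoder(torch.stack(D))            # Gaussian q(c; D)
    c = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparametrise

    nll = -decoder.log_prob(x, c)                   # -log p(x|c), assumed API
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum()
    loss = nll + kl / len(X)                        # rescaled encoding cost

    opt.zero_grad()
    loss.backward()
    opt.step()
    return float(loss)
```

Averaged over random choices of x and D, this stochastic loss is the negative of the subsampled bound above, so ordinary SGD on it optimises a valid lower bound on log p(X).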
We demonstrate on simple 1D distributions that a Variational Homoencoder can bring improvements under exactly these circumstances, that is, when |D| is small or the inference network is weak. For this we created five datasets as follows, each containing 100 classes from a particular parametric family, and with 100 elements sampled from each class.
1. Gaussian: Each class is Gaussian with µ drawn from a Gaussian hyperprior (fixed σ²).
2. Mixture of Gaussians: Each class is an even mixture of two Gaussian distributions with location drawn from a Gaussian hyperprior (fixed σ² and separation).
Our results show that, when |D| is small, the Neural Statistician often places little to no information in q(c; D) (FIG2, top row). Our careful training suggests that this is not an optimisation difficulty, but is core to the objective as in BID3. In these cases a VHE better utilises the latent space, leading to improvements in both few-shot generation (by conditional NLL) and classification. Importantly, this is achieved while retaining good likelihood of test-set classes, typically matching or improving upon that achieved by a Neural Statistician (including a standard VAE, corresponding to |D| = 1). Please see Supplement 6.3 for further comparison to alternative objectives. Since the VHE introduces both data-resampling and KL-rescaling as modifications to this baseline, we separate the contributions of each using two intermediate objectives: Resample only: DISPLAYFORM0; Rescale only: DISPLAYFORM1. All models were trained on a random sample of 1200 Omniglot classes using images scaled to 28x28 pixels, dynamically binarised, and augmented by 8 rotations/reflections to produce new classes. We additionally used 20 small random affine transformations to create new instances within each class. Models were optimised using Adam, and we used training error to select the best parameters from 5 independent training runs. This was necessary to ensure a fair comparison with the Neural Statistician objective, which otherwise converges to a local optimum with q(c; D) = p(c). We additionally implemented the 'sample dropout' trick of the Neural Statistician BID6, but found that this did not have any effect on model performance.

Discriminative models, log q(y|x, X, Y) (1-shot / 5-shot accuracy):
Siamese Networks BID15: 88.1% / 97.0%
Matching Networks BID25: 93.8% / 98.7%
Convnet with memory module BID13: 95.0% / 98.6%
mAP-DLM BID24: 95.4% / 98.6%
Model-Agnostic Meta-learning BID7: 95.8% / 98.9%
Prototypical Networks BID23: 96.0% / 98.9%*
* Uses train/test split from BID2.

TAB0 collects classification results of models trained using each of the four alternative training objectives, for both architectures. When using a standard deconvolutional architecture, we find little difference in classification performance between all four training objectives, with the Neural Statistician and VHE models achieving equally high accuracy. For the hierarchical PixelCNN architecture, however, significant differences arise between training objectives. In this case, a Neural Statistician learns a strong global distribution over images but makes only minimal use of latent variables c. This means that, despite the use of a higher capacity model, classification accuracy is much poorer (66%) than that achieved using a deconvolutional architecture. For the same reason, conditional samples display an improved sharpness but are no longer identifiable to the cue images on which they were conditioned (FIG6).
Our careful training suggests that this is not an optimisation difficulty but is core to the objective, as discussed in BID3. By contrast, a VHE is able to gain a large benefit from the hierarchical PixelCNN architecture, with a 3-fold reduction in classification error (5-shot accuracy 98.8%) and conditional samples which are simultaneously sharp and identifiable (FIG6). This improvement is in part achieved by increased utilisation of the latent space, due to rescaling of the KL divergence term in the objective. However, our results show that this common technique is insufficient when used alone, leading to overfitting to cue images with an equally severe impairment of classification performance (accuracy 62.8%). Rather, we find that KL-rescaling and data resampling must be used together in order for the benefit of the powerful PixelCNN architecture to be realised. TAB3 lists the classification accuracy achieved by VHEs with both |D| = 1 and |D| = 5, as compared to existing deep learning approaches. We find that both networks are not only state-of-the-art amongst deep generative models, but are also competitive against the best discriminative models trained directly for few-shot classification. Unlike these discriminative models, a VHE is also able to generate new images of a character in one shot, producing samples which are simultaneously realistic and faithful to the class of the cue image (FIG5). As our goal is to model shared structure across images, we evaluate generative performance using the joint log likelihood of the entire Omniglot test set (rather than separately across images). From this perspective, a single element VAE will perform poorly as it treats all datapoints as independent, optimising a sum over log likelihoods for each element. By sharing latent variables across all elements of the same class, a VHE can improve upon this considerably. Previous work which evaluates likelihood typically uses the train/test split of BID2. However, our most appropriate comparison is with Generative Matching Networks BID0, as they also model dependencies within a class; thus, we trained models under the same conditions as them, using the harder test split from Lake et al. with no data augmentation. We evaluate the joint log likelihood of full character classes from the test set, normalised by the number of elements, using importance weighting with k=500 samples from q(c; X). As can be seen in TAB4, our hierarchical PixelCNN architecture is able to achieve state-of-the-art log likelihood only when trained using the full Variational Homoencoder objective.

(1/n) log ∏_i p(x_i):
DRAW BID8: < 96.5 nats
Conv DRAW BID9: < 91.0 nats
VLAE BID3: 89.83 nats
(1/n) log ∏_i p(x_i | x_{1:i−1}):
Variational Memory Addressing BID1: > 73.9 nats
Generative Matching Networks BID0: 62.42 nats
Shared-latent models:
Variational Homoencoder: 61.22 nats

To demonstrate how the VHE framework may apply to models with richer category structure, we built both a hierarchical and a factorial VHE (Figure 2) using simple modifications to the above architectures. For the hierarchical VHE, we extended the deconvolutional model with an extra latent layer a, using the same encoder and decoder architecture as c. This was used to encode alphabet-level structure for the Omniglot dataset, learning a generative model for alphabets of the form

p(X) = ∫ p(a) ∏_i [ ∫ p(c_i | a) ∏_{x∈X_i} p(x | c_i) dc_i ] da

Again, we trained this model using a single objective, using separately resampled subsets D_a and D_c to infer each latent variable (Supplement 6.2).
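The importance-weighted evaluation just described can be sketched as follows. The Gaussian-posterior `encoder` and the `decoder.log_prob` interfaces are illustrative assumptions on our part, and the estimator is the standard importance-weighted bound with proposal q(c; X), rather than code from the paper.

```python
import torch

@torch.no_grad()
def class_log_likelihood(encoder, decoder, X, k=500):
    """Importance-weighted estimate of log p(X) for one class, normalised
    by the number of elements:
    log p(X) ~ logsumexp_j [log p(c_j) + sum_i log p(x_i|c_j) - log q(c_j; X)] - log k
    """
    mu, logvar = encoder(torch.stack(X))
    std = (0.5 * logvar).exp()
    q = torch.distributions.Normal(mu, std)
    prior = torch.distributions.Normal(torch.zeros_like(mu), torch.ones_like(std))
    log_w = []
    for _ in range(k):
        c = q.sample()
        log_w.append(prior.log_prob(c).sum() - q.log_prob(c).sum()
                     + sum(decoder.log_prob(x, c) for x in X))
    log_w = torch.stack(log_w)
    return (torch.logsumexp(log_w, dim=0) - torch.log(torch.tensor(float(k)))) / len(X)
```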
We then tested this hierarchical model at both one-shot character generation and 5-shot alphabet generation, using samples from previously unseen alphabets. As shown in FIG7, our single trained model is able to learn structure at both layers of abstraction. For the factorial VHE, we extended the Omniglot dataset by assigning each image to one of 30 randomly generated styles (independent of its character class), modifying both the colour and pen stroke of each image. We then extended the PixelCNN model to include a 6-dimensional latent variable s to represent the style of an image, alongside the existing c to represent the character. We used a CNN for the style encoder q(s|D_s), and for each image location (i, j) we condition the PixelCNN decoder using the outer product s ⊗ c_ij. We then test this model on a style transfer task by feeding separate images into the character encoder q(c|D_c) and style encoder q(s|D_s), then rendering a new image from the inferred (c, s) pair. We find that synthesised samples are faithful to the respective character and style of both cue images (FIG8), demonstrating the ability of a factorial VHE to successfully disentangle these two image factors using separate latent variables. We introduced the Variational Homoencoder: a deep hierarchical generative model learned by a novel training procedure which resembles few-shot generation. This framework allows latent variables to be shared across a large number of elements in a dataset, encouraging them to be well utilised even alongside highly expressive decoder networks. We demonstrate this by training a hierarchical PixelCNN model on the Omniglot dataset, and show that our novel training objective is responsible for the state-of-the-art results it achieves. This model is arguably the first which uses a general purpose architecture to both attain near human-level one-shot classification performance and produce high quality samples in one-shot generation. The VHE framework extends naturally to models with richer latent structure, as we demonstrate with two examples: a hierarchical model which generalises the style of an alphabet to produce new characters, and a factorial model which separates the content and drawing style of coloured character images. In addition to these modelling extensions, our variational bound may also be tightened by learning a subsampling procedure q(D; X), or by introducing an auxiliary inference network r(D; c, X) as discussed in Supplementary Material 6.1. While such modifications were unnecessary for our experiments on Omniglot character classes, we expect that they may yield improvements on other datasets with greater intra-class diversity.

6 SUPPLEMENTARY MATERIAL

The likelihood lower bound in the VHE objective may also be tightened by introduction of an auxiliary network r(D; c, X), trained to infer which subset D ⊂ X was used in q. This meta-inference approach was introduced in BID21 to develop stochastic variational posteriors using MCMC inference, and has recently been applied to approximate inference evaluation. Applied to Equation 12, this yields a modified bound for the VHE objective DISPLAYFORM0 where q(D; X) describes the stochastic sampling procedure for sampling D ⊂ X, which indeed may itself be learned using policy gradients. We have conducted preliminary experiments using fixed q and a simple functional form r(D; c, X) = ∏_i r(d_i; c, X) ∝ ∏_i f_ψ(c) · ξ_{d_i}, learning parameters ψ and embeddings {ξ_d : d ∈ X}; however, on the Omniglot dataset we found no additional benefit over the strictly loose bound (Equation 12).
We attribute this to the already high similarity between elements of the same Omniglot character class, allowing the approximate posterior q(c; D) to be relatively robust to different choices of D. However, we expect that the gain from using such a tightened objective may be much greater for domains with lower intra-class similarity (e.g. natural images), and thus suggest the tightened bound of Equation 21 as a direction for future research. The resampling trick may be applied iteratively, to construct likelihood bounds over hierarchically organised data. Expanding on Equation 12, suppose that we have a collection of datasets DISPLAYFORM0. For example, each X might be a different alphabet whose latent description a generates many character classes X_i, and for each of these a corresponding latent c_i is used to generate many images x_ij. From this perspective, we would like to learn a generative model for alphabets X of the form DISPLAYFORM1. Reapplying the same trick as before yields a bound taken over all elements x: DISPLAYFORM2. This suggests an analogous hierarchical resampling procedure: summing over every element x, we can bound the log likelihood of the full hierarchy by resampling subsets D_c, D_a, etc. at each level to construct an approximate posterior. All networks are trained together by this single objective, sampling x, D_a and D_c for each gradient step. Note that this procedure need only pass sampled elements, rather than full classes, into the upper-level encoder q_a. Our architecture uses an 8x28x28 latent variable c, with the full architecture detailed below. For our classification experiments, we trained 5 models on each of the objectives (VHE, Rescale only, Resample only, NS). Occasionally we found instability in optimisation, causing sudden large increases in the training objective. When this happened, we halted and restarted training. All models were trained for 100 epochs on 1000 characters from the training set (the remaining 200 were used as validation data for model selection). Finally, for each objective we selected the parameters achieving the best training error. Note that we did not optimise or select models based on classification performance, other than through our development of our model's architecture. However, we find that classification performance is well correlated with the generative training objective, as can be seen in the full table of results. We perform classification by calculating the expected conditional likelihood under the variational posterior: E_{q(c;D)}[p(x|c)]. This is approximated using 20 samples for the outer expectation, and importance sampling with k = 10 for the inner integral p(x|c) = E_{q(t|x)} DISPLAYFORM3. To evaluate and compare log likelihood, we trained 5 more models with the same architecture, this time on the canonical 30-20 alphabet split of Lake et al. We did not augment our training data. Again, we split the set into training data (25 alphabets) and validation data, but do not use the validation set in training or evaluation for our final results. We estimate the total class log likelihood by importance weighting, using k=20 importance samples of the class latent c and k=10 importance samples of the transformation latent t for each instance. [d] denotes a dimension-d tensor. {t} denotes a set with elements of type t. Posteriors q are Gaussian. A PixelCNN with autoregressive weights along only the spatial (not depth) dimensions of c.
We use 2 layers of masked 64x3x3 convolutions, followed by a ReLU and two 8x1x1 convolutions corresponding to the mean and log variance of a Gaussian posterior for the following pixel. We extend the same architecture described in Appendix B of BID6, with only a simple modification: we introduce a new latent layer containing a 64-dimensional variable a, with a Gaussian prior. We give p(c|a) the same functional form as p(z|c), and give q(a|D_a) the same functional form as q(c; D_c) using the shared encoder. We created a VHE using the same deconvolutional architecture as applied to Omniglot, and trained it on the Caltech-101 Silhouettes dataset. 10 object classes were held out as test data, which we use to generate both 1-shot and 5-shot conditional samples.
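As a companion to the classification rule stated in this supplement (scoring each candidate class by E_{q(c;D)}[p(x|c)] with 20 outer samples), the following hedged sketch shows the decision procedure. For brevity it omits the inner importance-sampled integral over the transformation latent t, and the `encoder`/`decoder` interfaces are our own assumptions.

```python
import torch

@torch.no_grad()
def classify(encoder, decoder, x, support_sets, n_outer=20):
    """Score each candidate class by E_{q(c;D)}[p(x|c)], estimated with
    n_outer posterior samples (averaged in log space), and return the
    index of the best-scoring class."""
    scores = []
    for D in support_sets:                       # one cue set D per class
        mu, logvar = encoder(torch.stack(D))
        std = (0.5 * logvar).exp()
        log_p = [decoder.log_prob(x, mu + torch.randn_like(std) * std)
                 for _ in range(n_outer)]
        scores.append(torch.logsumexp(torch.stack(log_p), dim=0)
                      - torch.log(torch.tensor(float(n_outer))))
    return int(torch.stack(scores).argmax())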
Technique for learning deep generative models with shared latent variables, applied to Omniglot with a PixelCNN decoder.
919
scitldr
Common-sense or background knowledge is required to understand natural language, but in most neural natural language understanding (NLU) systems, the requisite knowledge is indirectly acquired from static corpora. We develop a new reading architecture for the dynamic integration of explicit knowledge in NLU models. A new task-agnostic reading module provides refined word representations to a task-specific NLU architecture by processing knowledge in the form of free-text statements, together with the task-specific inputs. Strong performance on the tasks of document question answering (DQA) and recognizing textual entailment (RTE) demonstrates the effectiveness and flexibility of our approach. Analysis shows that our models learn to exploit knowledge selectively and in a semantically appropriate way. Understanding natural language depends crucially on common-sense or background knowledge, for example, knowledge about what concepts are expressed by the words being read (lexical knowledge), and what relations hold between these concepts (relational knowledge). As a simple illustration, if an agent needs to understand that the statement "King Farouk signed his abdication" is entailed by "King Farouk was exiled to France in 1952, after signing his resignation", it must know (among other things) that abdication means resignation of a king. In most neural natural language understanding (NLU) systems, the requisite knowledge is implicitly encoded in the models' parameters. That is, what knowledge is present has been learned from task supervision and also by pre-training word embeddings (where distributional information reliably reflects certain kinds of useful knowledge, such as semantic relatedness). However, acquisition of knowledge from static training corpora is limiting for two reasons. First, we cannot expect that all knowledge that could be important for solving an NLU task can be extracted from a limited amount of training data. Second, as the world changes, the facts that may influence how a text is understood will likewise change. In short: building suitably large corpora to capture all relevant information, and keeping the corpus and derived models up to date with changes to the world, would be impractical. In this paper, we develop a new architecture for dynamically incorporating external knowledge in NLU models. Rather than relying only on static knowledge implicitly present in the training data, supplementary knowledge is retrieved from a knowledge base to assist with understanding text inputs. Since NLU systems must necessarily read and understand text inputs, our approach incorporates knowledge by repurposing this reading machinery: that is, we read the text being understood together with supplementary natural language statements that assert facts (assertions) which are relevant to understanding the content (§2). Our knowledge-augmented NLU systems operate in a series of phases. First, given the text input that the system must understand, which we call the context, a set of relevant supporting assertions is retrieved. While learning to retrieve relevant information for solving NLU tasks is an important question (BID21, inter alia), in this work we focus on learning how to incorporate retrieved information, and use simple heuristic retrieval methods to identify plausibly relevant results from an external knowledge base.
Once the supplementary texts have been retrieved, we use a word embedding refinement strategy that incrementally reads the context and retrieved assertions, starting with context-independent word embeddings and building successively refined embeddings of the words that ultimately reflect both the relevant supporting assertions and the input context (§3). These contextually refined word embeddings, which serve as dynamic memory to store newly incorporated knowledge, are used in any task-specific reading architecture. The overall architecture is illustrated in FIG0. Although we are incorporating a new kind of information into the NLU pipeline, a strength of this approach is that the architecture of the reading module is independent of the final NLU task: the only requirement is that the final architecture use word embeddings. We carry out experiments on several different datasets on the tasks of document question answering (DQA) and recognizing textual entailment (RTE), evaluating the impact of our proposed solution with both basic task architectures and a sophisticated task architecture for RTE (§4). We find that our embedding refinement strategy is quite effective (§5). On four standard benchmarks, we show that refinement helps: even refining the embeddings just using the context (and no additional information) can improve performance significantly, and adding knowledge helps further. Our results are very competitive, setting a new state-of-the-art on the recent TriviaQA benchmarks, which is remarkable considering the simplicity of the chosen task-specific architecture. Finally, we provide a detailed analysis of how knowledge is being used by an RTE system (§6), including experiments showing that our system is capable of making appropriate counterfactual inferences when provided with "false knowledge". Knowledge resources make information that could potentially be useful for improving NLU available in a variety of different formats, such as (subject, predicate, object)-triples, relational databases, and other structured formats. Rather than tailoring our solution to a particular structured representation, we assume that all supplementary information either already exists in natural language statements or can easily be recoded as natural language. In contrast to mapping from unstructured to structured representations, the inverse problem is not terribly difficult. For example, given a triple (monkey, isA, animal) we can construct the free-text assertion "a monkey is an animal" using a few simple rules, as sketched below. Finally, the free-text format means that knowledge that exists only in unstructured text form is usable by our system. A major question that remains to be answered is: given some text that is to be understood, what supplementary knowledge should be incorporated? The retrieval of contextually relevant information from knowledge sources is a complex research topic by itself, and it is likewise crucially dependent on the format of the underlying knowledge base. There are several statistical approaches BID16 and, more recently, neural approaches BID19, as well as approaches based on reinforcement learning BID21. In this work we make use of a simple heuristic with which we almost exhaustively retrieve all potentially relevant assertions (see §4), and rely on our reading architecture to learn to extract only relevant information. In the next section, we turn to the question of how to leverage the retrieved supplementary knowledge (encoded as text) in an NLU system.
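A minimal sketch of the kind of recoding rules alluded to above; the relation names follow ConceptNet conventions, but the specific templates (and the fallback) are our own illustration, not the paper's exact rule set.

```python
# Hand-written templates for turning KB triples into free-text assertions.
# Article handling is deliberately naive (no a/an agreement).
TEMPLATES = {
    "IsA":         "a {s} is a {o}",
    "Synonym":     "{s} is a synonym of {o}",
    "Antonym":     "{s} is an antonym of {o}",
    "RelatedTo":   "{s} is related to {o}",
    "DerivedFrom": "{s} is derived from {o}",
}

def triple_to_assertion(subj: str, pred: str, obj: str) -> str:
    """Render a (subject, predicate, object) triple as a natural language
    statement, e.g. (monkey, IsA, animal) -> "a monkey is a animal"."""
    template = TEMPLATES.get(pred, "{s} {p} {o}")  # fall back to the raw relation
    return template.format(s=subj, o=obj, p=pred.lower())
```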
In order to incorporate information from retrieved input texts, we propose to compute contextually refined word representations prior to processing the NLU task at hand, and to pass them to the task in the form of word embeddings. Word embeddings thus serve as a form of memory that not only contains general-purpose knowledge (as in typical neural NLU systems) but also contextual information (including retrieved knowledge). Our incremental refinement process encodes input texts and then updates the word embedding matrix using the encoded input, over multiple reading steps. Words are first represented non-contextually (i.e., standard word type embeddings), which can be conceived of as the columns in an embedding matrix E^0. At each progressive reading step ℓ ≥ 1, a new embedding matrix E^ℓ is constructed by refining the embeddings from the previous step E^{ℓ−1} using (user-specified) contextual information X^ℓ for reading step ℓ, which is a set of natural language sequences (i.e., texts). An illustration of our incremental refinement strategy can be found in FIG0. In the following, we define this procedure formally. We denote the hidden dimensionality of our model by n and a fully-connected layer by DISPLAYFORM0. The first representation level consists of non-contextual word representations, that is, word representations that do not depend on any input; these can be conceived of as an embedding matrix E^0 whose columns are indexed by words in Σ*. The non-contextual word representation e^0_w for a single word w is computed by using a gated combination of fixed, pre-trained word vectors e^p_w ∈ R^n with learned character-based embeddings e^char_w ∈ R^n. The formal definition of this combination is given in Eq. 1: DISPLAYFORM1. We compute e^char_w using a single-layer convolutional neural network using n convolutional filters of width 5, followed by a max-pooling operation over time. Combining pre-trained with character-based word embeddings in such a way is common practice. Our approach follows BID28 and BID31. In order to compute contextually refined word embeddings E^ℓ given prior representations E^{ℓ−1}, we assume a given set of texts X^ℓ = {x_1, x_2, ...} that are to be read at refinement iteration ℓ. Each text x_i is a sequence of word tokens. We embed all tokens of every x_i using the embedding matrix from the previous layer, E^{ℓ−1}. To each word, we concatenate a one-hot vector of length L with position ℓ set to 1, indicating which layer is currently being processed. Stacking the vectors into a matrix, we obtain X^ℓ_i ∈ R^{d×|x_i|}. This matrix is processed by a bidirectional recurrent neural network, a BiLSTM BID8 in this work. The resulting output is further projected to X̃^ℓ_i by a fully-connected layer followed by a ReLU non-linearity (Eq. 2): DISPLAYFORM2. To finally update the previous embedding e^{ℓ−1}_w of word w, we initially max-pool all representations of occurrences matching the lemma of w in every x ∈ X^ℓ, resulting in ê^ℓ_w (Eq. 3): DISPLAYFORM3. Finally, we combine the previous representation e^{ℓ−1}_w with ê^ℓ_w to form a context-sensitive representation e^ℓ_w via a gated addition, which lets the model determine how much to revise the embedding with the newly read information (Eqs. 4 and 5):

e^ℓ_w = u^ℓ_w ⊙ e^{ℓ−1}_w + (1 − u^ℓ_w) ⊙ ê^ℓ_w    (5)

where u^ℓ_w is the update gate of Eq. 4. Note that we soften the matching condition for w using lemmatization, lemma(w), during the pooling operation of Eq. 3 because contextual information about certain words is usually independent of the current word form w they appear in.
As a consequence, this minor linguistic pre-processing step allows for additional interaction between tokens of the same lemma. The important difference between our contextual refinement step and conventional multi-layer (RNN) architectures is the pooling operation that is performed over occurrences of tokens that share the same lemma. This effectively connects different positions within and between different texts with each other, thereby mitigating the problems arising from long-distance dependencies. More importantly, however, it allows models to make use of additional input such as relevant knowledge. We run experiments on four benchmarks for two popular tasks, namely recognizing textual entailment (RTE) and document question answering (DQA). In the following we describe different aspects of our experimental setup in more detail. Task-specific Models. Our primary interest is to explore the value of our refinement strategy with relatively generic task architectures. Therefore, we chose basic single-layer bidirectional LSTMs (BiLSTMs) as encoders, with a task-specific feed-forward neural network on top for the final prediction. Such models are common baselines for NLU tasks and can be considered general reading architectures, as opposed to the more highly tuned, task-specific NLU systems that are necessary to achieve state-of-the-art results. However, since such models frequently underperform more customized architectures, we also add our refinement module to a reimplementation of a state-of-the-art architecture for RTE called ESIM BID4. All models are trained end-to-end jointly with the refinement module. For the DQA baseline system we add a simple lemma-in-question feature (liq), as suggested in BID31, when encoding the context, to compare against competitive baseline results. We provide the exact model implementations for our BiLSTM baselines and general training details in Appendix A. Question Answering. We apply our DQA models to two recent DQA benchmark datasets, SQuAD BID25 and TriviaQA BID11. The task is to predict an answer span within a provided document p given a question q. Both datasets are large-scale, containing on the order of 100k examples. Because TriviaQA is collected via distant supervision, the test set is divided into a large but noisy distant-supervision part and a much smaller (on the order of hundreds) human-verified part. We report on both. See Appendix A.1 for implementation details. Recognizing Textual Entailment. We test on the frequently used SNLI dataset BID2, a collection of 570k sentence pairs, and the more recent MultiNLI dataset (433k sentence pairs) BID32. Given two sentences, a premise p and a hypothesis q, the task is to determine whether p either entails, contradicts or is neutral to q. See Appendix A.2 for implementation details. Knowledge Source. We make use of ConceptNet 2 BID29, a freely-available, multi-lingual semantic network that originated from the Open Mind Common Sense project and incorporates selected knowledge from various other knowledge sources, such as Wiktionary 3, Open Multilingual WordNet 4, OpenCyc and DBpedia 5. It presents information in the form of relational triples. Assertion Retrieval. We would like to obtain information about the relations of words and phrases between q and p from ConceptNet, in order to strengthen the connection between the two sequences. Because assertions a in ConceptNet come in the form of (subject, predicate, object)-triples (s, r, o), we retrieve all assertions for which s appears in q and o appears in p, or vice versa.
Because still too many such assertions might be retrieved for an instance, we rank all retrievals based on their respective subject and object. To this end we compute a ranking score, which is the inverse product of the number of appearances of the subject and of the object in the KB, that is,

score(a) = ( Σ_{(s',r',o')∈KB} I[s' = s] · Σ_{(s',r',o')∈KB} I[o' = o] )^{−1},

where I denotes the indicator function. This is closely related to the popular idf score (inverse document frequency) from information retrieval, which ranks terms higher that appear less frequently across different documents. During training and evaluation we only retain the top-k assertions, where k is specified for the individual experiments separately. Note that (although very rarely) it might happen that no assertions are retrieved at all. Refinement Order. When employing our embedding-refinement strategy, we first read the document (p) followed by the question (q) in the case of DQA, and the premise (p) followed by the hypothesis (q) for RTE, that is, X¹ = {p} and X² = {q}. Additional knowledge in the form of a set of assertions A is integrated after reading the task-specific input for both DQA and RTE, that is, X³ = A. In preliminary experiments we found that the final performance is not significantly sensitive to the order of presentation, so we decided to fix our order as defined above. Table 1 presents our results on two question answering benchmarks. We report on the SQuAD development set 7 and the two more challenging TriviaQA test sets, which demonstrate that the introduction of our reading architecture helps consistently, with additional gains from using knowledge. Our systems even outperform current state-of-the-art models on TriviaQA, which is surprising given the simplicity of our task-specific architecture and the complexity of the others. For instance, the system of BID9 uses a complex multi-hop attention mechanism to achieve their results. Even our baseline BiLSTM + liq system reaches very competitive results on TriviaQA, which is in line with the findings of BID31. To verify that it is not the additional computation that gives the performance boosts when using only our reading architecture (without knowledge), we also ran experiments with 2-layer BiLSTMs (+liq) for our baselines, which exhibit similar computational complexity to BiLSTM + reading. We found that the second layer even hurts performance. This demonstrates that pooling over word/lemma occurrences in a given context between layers, which constitutes the main difference to conventional stacked RNNs, is a powerful, yet simple technique. In any case, the most important finding of these experiments is that knowledge actually helps considerably, with up to 2.2/2.9% improvements on the F1/Exact measures. TAB3 shows the results of our RTE experiments. In general, the introduction of our refinement strategy almost always helps, both with and without external knowledge. When providing additional knowledge from ConceptNet, our BiLSTM-based models improve substantially, while the ESIM-based models improve only on the more difficult MultiNLI dataset. Compared to previously published state-of-the-art systems, our models acquit themselves very well on the MultiNLI benchmark, and competitively on the SNLI benchmark. In parallel to this work, BID6 developed a novel task-specific architecture for RTE that achieves slightly better performance on MultiNLI than our ESIM+reading+knowledge based models. 8
It is worth observing that with our knowledge-enhanced embedding architecture, our generic BiLSTM-based task model outperforms ESIM on MultiNLI, which is architecturally much more complex and designed specifically for the RTE task. Finally, we remark that despite careful tuning, our re-implementation of ESIM fails to match the 88% reported in BID4; however, with MultiNLI, we find that our implementation of ESIM performs considerably better (by approximately 5%). The instability of these results, as well as the failure of a custom RTE architecture to consistently perform well, suggests that current SotA RTE models may be overfit to the SNLI dataset. We find that there is only little impact from using external knowledge on the RTE task when using a more sophisticated task model such as ESIM. We hypothesize that the attention mechanisms within ESIM, jointly with powerful pre-trained word representations, allow for the recovery of some important lexical relations when trained on a large dataset. It follows that by reducing the number of training examples and impoverishing pre-trained word representations, the impact of using external knowledge should become larger. To test this hypothesis, we gradually impoverish pre-trained word embeddings by reducing their dimensionality with PCA while reducing the number of training instances at the same time. Our joint data and dimensionality reduction results are presented in Table 3. They show that there is indeed a slightly larger benefit when employing knowledge in the more impoverished settings, with the largest improvements over using only the novel reading architecture when using around 10k examples and dimensionality reduced to 10. However, we observe that the biggest overall impact over the baseline ESIM model stems from our contextual refinement strategy (reading), which is especially pronounced for the 1k and 3k experiments. This highlights once more the usefulness of our refinement strategy even without the use of additional knowledge.

Footnotes: 2 http://conceptnet.io/ 3 http://wiktionary.org/ 4 http://compling.hss.ntu.edu.sg/omw/ 5 http://dbpedia.org/ 6 We exclude ConceptNet 4 assertions created by only one contributor and from Verbosity to reduce noise. 7 Due to restrictions on code sharing, we are not able to use the public evaluation server to obtain test set scores for SQuAD. However, for the remaining tasks we report both development accuracy and held-out test set performance. 8 Our reading+knowledge refinement architecture can of course be used with this new model.

Table 3: Development set results when reducing training data and embedding dimensionality with PCA. In parentheses we report the relative differences to the respective result directly above.

Table 4: Three examples for the antonym ↔ synonym swapping experiment on MultiNLI. p: premise, h: hypothesis, a: assertion, ā: swapped assertion.

Figure 2: Performance differences when ignoring certain types of knowledge, i.e., relation predicates, during evaluation. Normalized performance differences are measured on the subset of examples for which an assertion of the respective relation predicate occurs.

Is additional knowledge used? To verify whether and how our models make use of additional knowledge, we conducted several experiments. First, we evaluated models trained with knowledge on our tasks while not providing any knowledge at test time. This ablation drops performance by 3.7-3.9% accuracy on MultiNLI, and by 4% F1 on SQuAD. This indicates that the model is refining the representations using the provided assertions in a useful way.
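To make the reading module that consumes these assertions concrete, here is a hedged sketch of a single refinement step (Eqs. 2-5 of §3) in PyTorch-like code. For brevity we identify words with their lemmas, and the exact input to the update gate is our own assumption, since only its gating role is specified above.

```python
import torch
import torch.nn as nn

class RefinementStep(nn.Module):
    """One reading step: BiLSTM-encode the step's texts, max-pool the
    states of all token positions sharing a lemma (Eq. 3), and gate the
    update of each word's embedding (Eqs. 4-5)."""
    def __init__(self, in_dim, n):
        super().__init__()
        self.bilstm = nn.LSTM(in_dim, n, bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * n, n)   # Eq. 2 projection (+ ReLU)
        self.gate = nn.Linear(2 * n, n)   # assumed gate parametrisation

    def forward(self, e_prev, token_embs, lemma_positions):
        # token_embs: [1, T, in_dim], previous-layer embeddings of all tokens
        # of X^l with the one-hot layer indicator concatenated
        h, _ = self.bilstm(token_embs)
        x = torch.relu(self.proj(h)).squeeze(0)           # [T, n], Eq. 2
        e_new = dict(e_prev)
        for w, positions in lemma_positions.items():      # keyed by lemma(w)
            e_hat = x[list(positions)].max(dim=0).values  # Eq. 3: pool occurrences
            u = torch.sigmoid(self.gate(torch.cat([e_prev[w], e_hat])))
            e_new[w] = u * e_prev[w] + (1 - u) * e_hat    # Eqs. 4-5
        return e_new
```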
Are models sensitive to the semantics of the provided knowledge? The ablation result above does not show that the models utilize the provided assertions in any consistent way (it may just reflect a mismatch of training and testing conditions). Therefore, to test our models' sensitivity towards the semantics of the assertions, we run an experiment in which we swap the synonym with the antonym predicate in the provided assertions at test time. Because of our heuristic retrieval mechanism, not all such "counterfactuals" will affect the truth of the inference, but we still expect to see a more significant impact. The performance drop on MultiNLI examples for which either a synonym- or an antonym-assertion is retrieved is about 10% for both the BiLSTM and the ESIM model. This very large drop clearly shows that our models are sensitive to the semantics of the provided knowledge. Examples of prediction changes are presented in Table 4. They demonstrate that the system has learned to trust the presented assertions to the point that it will make appropriate counterfactual inferences, that is, the change in knowledge has caused the change in prediction. What knowledge is used? After establishing that our models are indeed sensitive to semantics, we wanted to find out which type of knowledge is important for which task. For this analysis we exclude assertions with the most prominent predicates in our knowledge base individually when evaluating our models. The results are presented in Figure 2. They demonstrate that the biggest performance drop in total (blue bars) stems from "related to" assertions. This very prominent predicate appears much more frequently than other assertions and helps connect related parts of the two input sequences with each other. We believe that "related to" assertions offer benefits mainly from a modeling perspective, by strongly connecting the input sequences with each other and thus bridging long-range dependencies, similar to attention. Looking at the relative drops obtained by normalizing the performance differences on the actually affected examples (green), we find that our models depend highly on the presence of "antonym" and "synonym" assertions for all tasks, as well as partially on "is a" and "derived from" assertions. This is an interesting finding which shows that the sensitivity of our models is selective wrt. the type of knowledge and task. The fact that the largest relative impact stems from antonyms is very interesting, because it is known that such information is hard to capture with distributional semantics contained in pre-trained word embeddings. The role of knowledge in natural language understanding has long been remarked on, especially in the context of classical models of AI BID27 BID18; however, it has only recently begun to play a role in neural network models of NLU BID0 BID34 BID15 BID5. However, previous efforts have focused on specific tasks or certain kinds of knowledge, whereas we take a step towards a more general-purpose solution for the integration of heterogeneous knowledge for NLU systems, by providing a simple, general-purpose reading architecture that can read knowledge encoded in simple natural language statements, e.g., "abdication is a type of resignation". BID1 use textual word definitions as a source of information about the embeddings of OOV words. In the area of visual question answering, BID33 utilize external knowledge in the form of DBpedia comments (short abstracts/definitions) to improve the answering ability of a model.
BID17 explicitly incorporate knowledge graphs into an image classification model. BID34 created a recall mechanism within a standard LSTM cell that retrieves pieces of external knowledge encoded by a single representation for a conversation model. Concurrently, BID5 exploit linguistic knowledge using MAGE-GRUs, an adaptation of GRUs to handle graphs; however, external knowledge has to be present in the form of triples. The main difference to our approach is that we incorporate external knowledge in free-text form on the word level prior to processing the task at hand, which constitutes a more flexible setup. BID0 exploit knowledge base facts about mentioned entities for neural language models. BID1 and BID15 create word embeddings on-the-fly by reading word definitions prior to processing the task at hand. BID24 seamlessly incorporate information about word senses into their representations before solving the downstream NLU task, which is similar. We go one step further by seamlessly integrating all kinds of fine-grained assertions about concepts that might be relevant for the task at hand. Another important aspect of our approach is the notion of dynamically updating word representations. Tracking and updating concepts, entities or sentences with dynamic memories is a very active research direction BID14 BID7 BID10 BID13. However, those works typically focus on particular tasks, whereas our approach is task-agnostic and, most importantly, allows for the integration of external knowledge. Other related work includes storing temporary information in weight matrices instead of explicit neural activations (such as word representations), as a biologically more plausible alternative. We have presented a novel task-agnostic reading architecture that allows for the dynamic integration of background knowledge into neural NLU models. Our solution, which is based on the incremental refinement of word representations by reading supplementary inputs, is flexible and can be used with virtually any existing NLU architecture that relies on word embeddings as input. Our results show that embedding refinement using both the system's text inputs, as well as supplementary texts encoding knowledge, can yield large improvements. In particular, we have shown that relatively simple task architectures (e.g., based on simple BiLSTM readers) can become competitive with state-of-the-art, task-specific architectures when augmented with our reading architecture. In the following we explain the detailed implementation of our two task-specific baseline models. We assume we have computed the contextually (un-)refined word representations, depending on the setup, and embedded our input sequences q = (q_1, ..., q_{L_Q}) and p = (p_1, ..., p_{L_P}) to Q ∈ R^{n×L_Q} and P ∈ R^{n×L_P}, respectively. The word representation update gate in Eq. 4 is initialized with a bias of 1, to refine representations only slightly in the beginning of training. In the following, as before, we denote the hidden dimensionality of our model by n and a fully-connected layer by DISPLAYFORM0.

A.1 QUESTION ANSWERING

Encoding. In the DQA task q refers to the question and p to the supporting text. At first we process both sequences by identical BiLSTMs in parallel (Eq. 6), followed by separate linear projections (Eq. 7):

Q̄ = BiLSTM(Q), P̄ = BiLSTM(P), with Q̄ ∈ R^{2n×L_Q}, P̄ ∈ R^{2n×L_P}    (6)

DISPLAYFORM1    (7)

U^Q, U^P ∈ R^{n×2n} are initialized by [I; I], where I ∈ R^{n×n} is the identity matrix. Prediction. Our prediction (or answer) layer is the same as in BID31.
We first compute a weighted, n-dimensional representation q̃ of the processed question Q̄ (Eq. 8): DISPLAYFORM2. The probability distributions p_s/p_e for the start/end location of the answer are computed by a 2-layer MLP with a ReLU-activated hidden layer s_j, as follows: DISPLAYFORM3. The model is trained to minimize the cross-entropy loss of the predicted start and end positions, respectively. During evaluation we extract the span (i, k) with the best span score p_s(i) · p_e(k), up to a maximum token length of k − i = 16, as sketched below. Encoding. Analogous to DQA we encode our input sequences by BiLSTMs; however, for RTE we use conditional encoding BID26 instead. Therefore, we initially process the embedded hypothesis Q by a BiLSTM and use the respective end states of the forward and backward LSTM as initial states for the forward and backward LSTM that processes the embedded premise P. Prediction. We concatenate the outputs of the forward and backward LSTMs processing the premise p, i.e., [p̄^fw_t; p̄^bw_t] ∈ R^{2n}, and run each of the resulting L_P outputs through a fully-connected layer with ReLU activation (h_t), followed by a max-pooling operation over time, resulting in a hidden state h ∈ R^n. Finally, h is used to predict the RTE label as follows: DISPLAYFORM0. The probability of choosing category c ∈ {entailment, contradiction, neutral} is defined in Eq. 10. Finally, the model is trained to minimize the cross-entropy loss of the predicted category probability distribution p. As pre-processing steps we lowercase all inputs and tokenize them. Additionally, we make use of lemmatization as described in §3.2, which is necessary for matching. As pre-trained word representations we use 300-dimensional word embeddings from GloVe BID23. We employed Adam BID12 for optimization with an initial learning rate of 10^-3, which was halved whenever the F1 measure (DQA) or the accuracy (RTE) dropped on the development set between 1000/2000 mini-batches for DQA and RTE, respectively. We used mini-batches of size 16 for DQA and 64 for RTE. Additionally, for regularization we make use of dropout with a rate of 0.2 on the computed non-contextual word representations e^0_w defined in §3.1, with the same dropout mask for all words in a batch. All our models were trained with 3 different random seeds and the top performance is reported.
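As an illustration of the span-extraction rule above, the following hedged sketch enumerates candidate spans under the length constraint; it assumes `p_start` and `p_end` are the per-token probability vectors produced by the answer layer.

```python
def best_span(p_start, p_end, max_len=16):
    """Return the span (i, k) maximising p_s(i) * p_e(k), with the span
    length capped at max_len tokens as in the paper."""
    best, best_score = (0, 0), -1.0
    for i in range(len(p_start)):
        for k in range(i, min(i + max_len, len(p_end))):
            score = float(p_start[i]) * float(p_end[k])
            if score > best_score:
                best, best_score = (i, k), score
    return best
```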
In this paper we present a task-agnostic reading architecture for the dynamic integration of explicit background knowledge in neural NLU models.
920
scitldr
Mixed-precision arithmetic combining both single- and half-precision operands in the same operation has been successfully applied to train deep neural networks. Despite the advantages of mixed-precision arithmetic in terms of reducing the need for key resources like memory bandwidth or register file size, it has a limited capacity for diminishing computing costs and requires 32 bits to represent its output operands. This paper proposes two approaches to replace mixed-precision with half-precision arithmetic during a large portion of the training. The first approach achieves accuracy ratios slightly lower than the state-of-the-art by using half-precision arithmetic during more than 99% of training. The second approach reaches the same accuracy as the state-of-the-art by dynamically switching between half- and mixed-precision arithmetic during training. It uses half-precision during more than 94% of the training process. This paper is the first to demonstrate that half-precision can be used for a very large portion of DNN training while still reaching state-of-the-art accuracy. The use of Deep Neural Networks (DNNs) is becoming ubiquitous in areas like computer vision, speech recognition, or language translation. DNNs display very remarkable pattern detection capacities and, more specifically, Convolutional Neural Networks (CNNs) are able to accurately detect and classify objects over very large image sets. Despite this success, a large number of samples must be exposed to the model for tens or even hundreds of times during training until an acceptable accuracy threshold is reached, which drives up training costs in terms of resources like memory storage or computing time. To mitigate these very large training costs, approaches based on data representation formats simpler than the Floating Point 32-bit (FP32) standard have been proposed. These approaches successfully mitigate the enormous training costs of DNNs by using data representation formats that either reduce computing costs or diminish the requirements in terms of memory storage and bandwidth. In particular, some of these proposals have shown the benefits of combining half-precision and single-precision compute during training in terms of keeping model accuracy and reducing compute and memory costs. These approaches accelerate linear algebra operations by multiplying half-precision input operands and accumulating into 32-bit outputs. While this mixed-precision (MP) arithmetic can successfully reduce the use of resources like memory bandwidth or hardware components like register file size, it has a very limited capacity for diminishing computing costs and it is unable to reduce output data size. In this paper we propose new training methodologies able to exclusively use half-precision for a large part of the training process, which constitutes a very significant improvement over mixed-precision approaches in terms of compute and memory bandwidth requirements. We propose two different approaches: the first one statically assigns either the Brain Floating Point 16-bit (BF16) or the FP32 format to the model parameters involved in the training process, while the second dynamically switches between BF16 and MP during training depending on its progress.
Our approaches do not require mixed-precision arithmetic while computing linear algebra operations for a large portion of the training process, which enables them to deliver the same performance as if they were operating with half-precision arithmetic during that portion of training, while providing the same model accuracy as if FP32 were used. This paper is the first to demonstrate that half-precision can be extensively used during DNN training without the need for mixed-precision arithmetic. We made our code available 1. Mixed-precision training has been extensively explored in recent years. Approaches mixing Floating Point 16-bit (FP16) and FP32 datatypes have been proposed. In these approaches, multiplications of FP16 parameters are accumulated in FP32 registers to minimize data representation range and precision issues. Importantly, relevant phases of the training process like computing weight updates (WU) or dealing with batch normalization (BN) layers entirely use FP32, which implies that an FP32 representation of network weights and biases is kept during the whole training. This approach requires some additional computations to enforce that FP32 values are converted to FP16 without data representation range issues. This approach is used by Nvidia Tesla V100 GPUs via mixed-precision computing units called tensor cores, which are able to multiply FP16 parameters and store the results in FP32. Figure 1a displays the most fundamental operation of this approach combining FP16 and FP32, the mixed-precision Fused Multiply-Add (FMA) instruction, which computes D = A · B + C. Input parameters A and B are represented in the FP16 format. The result of the A · B operation is kept in FP32 and added to the C parameter, which is represented in FP32 as well. The final output D is also represented in FP32. FMA instructions constitute around 60% of the whole training workload for several relevant CNN models, as Section 3 shows. A more recent approach proposes mixed-precision arithmetic combining BF16 and FP32. It is very close to its FP16 counterpart, with the exception of the full-to-half precision conversion. Since BF16 has the same data representation range as FP32, conversion from full to half precision is very simple in this case, since it just requires applying the Round to Nearest Even (RNE) technique. This approach also processes WU and BN layers with FP32. Figure 1b shows a representation of a mixed-precision FMA combining BF16 and FP32. It is very close to the previously described FP16-FP32 FMA, with the only difference being the data representation format of input parameters A and B. While mixed-precision FMA instructions bring significant benefits since they require less memory bandwidth and register storage than FP32 FMAs, there is still a large margin for improvement if an entirely BF16 FMA like the one represented in Figure 1c could be extensively used for training purposes. First, since a BF16 FMA requires exactly one half of the register storage of an FP32 FMA, it doubles its Single Instruction Multiple Data (SIMD) vectorization capacity and, therefore, it may significantly increase its FMA instructions-per-cycle ratio. Extensions of Instruction Set Architectures (ISA) to allow SIMD parallelism are becoming a key element for floating-point performance, which has motivated major hardware vendors to include them in their products. Finally, BF16 FMA instructions also bring significant reductions in terms of memory bandwidth, since they involve 50% and 25% less data than FP32 and MP FMAs, respectively.
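The RNE conversion mentioned above is simple enough to state in a few lines. The following is a sketch of the standard software emulation (keep the upper 16 bits of the FP32 encoding after adding a rounding bias); it is our illustration, not the paper's tool, and it does not special-case NaN payloads.

```python
import struct

def fp32_to_bf16(x: float) -> float:
    """Round an FP32 value to BF16 with round-to-nearest-even, returned
    re-expanded to FP32 (low 16 mantissa bits zeroed)."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    lsb = (bits >> 16) & 1                      # ties round to the even value
    bits = (bits + 0x7FFF + lsb) & 0xFFFFFFFF   # add rounding bias
    return struct.unpack('<f', struct.pack('<I', bits & 0xFFFF0000))[0]
```

Because BF16 keeps FP32's 8 exponent bits, this conversion never overflows the representable range; only mantissa precision is lost.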
While half-precision arithmetic has not previously been used to train DNNs due to its lack of training convergence, this paper describes two techniques to use it extensively while keeping the same convergence properties as FP32. This paper analyzes in detail 3 relevant training workloads in Section 3, and applies these findings to build its two main contributions in Section 4. We consider three CNN models: AlexNet, Inception V2, and ResNet-50. Section 5 describes the exact way we use these models and the methodology we follow to analyze their training workload. Figure 2a shows an instruction breakdown of processing one batch on these networks. This figure shows how floating-point instructions constitute a large portion of these workloads. For example, they represent 58.44% of the total in the case of AlexNet. A very large portion of these floating-point instructions, 57.42% of the total, are FMA instructions. Inception and ResNet-50 display similar breakdowns, with FMA instructions likewise accounting for most of the floating-point total. Therefore, FMA instructions constitute a large portion of the whole training workload, while other FP32 instructions represent a small instruction count that remains below 1.1% for these three CNNs. This justifies focusing on FMA instructions, as executing them in half-precision has a large potential for performance improvement. Prior research describes the need for using 32-bit arithmetic in Weight Updates (WU) and Batch Normalization (BN) layers when using training approaches based on mixed-precision arithmetic. We run an experimental campaign to confirm this observation and to measure the number of instructions devoted to WU and BN. For the case of ResNet-50, this instruction count is around 30 million instructions per batch, that is, just 0.04% of the FP instructions. AlexNet and Inception V2 produce similar results. In conclusion, reducing the cost of FMA instructions has a high potential for very significant performance improvements even if WU and BN layers are computed using full-precision arithmetic. Processing one training batch for the cases of AlexNet, Inception, and ResNet-50 requires running 53.3, 37.2, and 70.0 billion dynamic instructions per batch, respectively. The number of model parameters drives the size of these workloads. AlexNet was trained with a batch size of 256, while Inception and ResNet use a batch size of 64. We propose two training methodologies that rely exclusively on half-precision BF16 for a large portion of the training process, i.e., a large portion of FMA instructions. Prior mixed-precision approaches preclude large gains in computing costs as some of the data elements remain in FP32. However, an FMA entirely relying on BF16 can potentially double the SIMD vectorization throughput of current processors and alleviate memory bandwidth requirements. We first propose a scheme that performs all FMA instructions in BF16 (see Figure 1c) except those involved in computing WU and processing BN layers, which are entirely performed in FP32. While this method might not deliver the desired level of accuracy for all CNNs, Section 6 shows how it behaves remarkably well for the Inception V2 model, since it obtains the same level of accuracy as state-of-the-art training using MP and FP32. However, some CNNs cannot entirely rely on half-precision arithmetic during training. For example, Figure 2b shows top-1 accuracy achieved by three training techniques during 15 epochs for ResNet-50.
The first technique (referred to as FP32 in Figure 2b) relies entirely on FP32 arithmetic, the second approach (referred to as MP in Figure 2b) represents state-of-the-art mixed-precision training, and the third approach (referred to as BF16 in Figure 2b) performs all FMA instructions in BF16 except for WU and BN. While the BF16 approach behaves relatively well, it displays lower accuracy than MP and FP32 for all the epochs, which indicates the need for an approach able to take advantage of BF16 arithmetic while delivering the same accuracy as mixed- or full-precision approaches. The methodology we use to generate Figure 2b is described in Section 5. Our second contribution dynamically switches between MP and BF16 to deliver the same accuracy as MP while relying on BF16 FMAs during a large portion of the training process. Algorithm 1 displays high-level pseudo-code of our proposal. It starts the training process using the state-of-the-art mixed-precision approach for several batches, defined by the numBatchesMP parameter. Then, it computes the Exponential Moving Average (EMA) of the training loss and, if its reduction is larger than a certain threshold (the emaThreshold parameter), it computes the next numBatchesBF16 batches using BF16 FMAs, except for WU and BN. Once training has gone through these numBatchesBF16 batches, our algorithm checks the EMA and compares its reduction with the emaThreshold parameter. If this reduction is not large enough, the algorithm switches back to MP arithmetic. Otherwise, it keeps using BF16 arithmetic for numBatchesBF16 batches before checking the EMA again. Our experiments are performed on Intel Xeon Platinum 8160 processors, which include the AVX512 ISA. We use the Intel-Caffe (Intel, a) framework (version 1.1.6a). We use the Intel MKL-DNN (Intel, c) Deep Neural Network library (version 0.18.0) and the Intel MKL library (Intel, b) (version 2019.0.3) to run numerical kernels since both libraries are optimized to perform well on our testing infrastructure. Finally, to define and run the experiments we use the pyCaffe python interface, which takes care of loading the data and orchestrating the execution. Due to the lack of available hardware implementing the BF16 numerical format, we rely on an emulation technique to perform our experiments. Several approaches have been used in the past to emulate the behaviour of reduced floating-point representations, most notably via libraries that perform transformations like truncation and rounding (e.g., Dawson & Düben, 2017). We develop a binary analysis tool based on PIN 3.7. Our tool captures and instruments dynamic instructions, which enables adapting numerical operands to the targeted numerical data format. Our approach seamlessly works on complex frameworks like PyTorch, Tensorflow, or Caffe, with interpreted languages, and is able to instrument instructions triggered by dynamically linked libraries. Our binary analysis tool performs the following steps: • It checks the current operation mode, which can be FP32, MP, or BF16 (see Figure 1). • It checks the current execution routine to determine if we are executing routines that belong to WU or BN layers. If that is the case, computation proceeds with FP32. • The tool intercepts the dynamic instructions of the workload and detects all floating-point operations, including FMAs. For each FMA instruction, operands that need to be rounded to BF16, depending on the current operation mode, are rounded using the RNE algorithm.
• The tool can dynamically change its operation mode anytime via a simple inter-process communication method that can be invoked from the python high-level interface. To mitigate the overhead of our binary analysis tool, we implement two optimizations: First, we vectorize the truncation and rounding routines via AVX512 instructions. Second, we avoid redundant rounding and truncation operations by identifying instructions belonging to the same basic block sharing some input operands already stored in the register file. These two optimizations reduce the overhead of the tool from 100× to 25× with respect to native runs of the binary on real hardware. This paper considers two different types of training techniques: static schemes and dynamic schemes. When using static schemes, the training procedure uses the same data representation format for a given parameter during its complete execution. For example, the three techniques displayed in Figure 2b are static. We define the following schemes: • MP: FMA instructions belonging to WU and BN layers always use FP32 precision. The remaining FMA instructions use the mixed-precision approach represented in Figure 1b. This scheme replicates prior work on mixed-precision. • BF16: FMA instructions belonging to WU and BN layers always use FP32 precision. The remaining FMA instructions use BF16 operands to multiply and to accumulate (Figure 1c). The BF16 method is the first contribution of this paper. It extensively uses half-precision arithmetic while displaying good convergence properties. The Dynamic scheme we propose in this paper switches between the MP and BF16 static techniques during training, as explained in Section 4 and detailed in Algorithm 1. This dynamic method improves the training convergence properties of BF16 while still relying on half-precision FMAs for a very large portion of the execution. The EMA threshold (emaThreshold) is set at 4%. This value is computed as the average EMA reduction when using FP32 computations. The minimum number of batches to be performed in BF16, defined by the numBatchesBF16 parameter, is set to 1,000, which precludes frequent unnecessary transitions between the two schemes. We set the numBatchesMP parameter to 10, which keeps the number of batches using the MP regime low while keeping its benefits in terms of convergence. To evaluate our proposals we consider the AlexNet, Inception V2, and ResNet-50 models. They are representative of the CNN state-of-the-art. We use the ImageNet database as training input. To keep execution times manageable when using our binary instrumentation tool, we run the experiments using a reduced ImageNet database, similar to the Tiny ImageNet Visual Recognition challenge data set (Fei-Fei). Therefore, we use 256,000 images divided into 200 categories for training, and 10,000 images for validation. The images are not modified in terms of size. All the evaluated CNN models remain unmodified; the only change is loading a reduced dataset. AlexNet is selected due to its simplicity in terms of structure and amount of required computations. To train AlexNet we consider a batch size of 256 and a base learning rate of 0.01, which is adjusted every 20 epochs taking into account a weight decay of 0.0005 and a momentum of 0.9. This model is trained for 32 epochs. We use Inception because it is a model conceived to reduce computational costs via cheap 1x1 convolutions.
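For concreteness, here is a compact Python sketch of the dynamic scheme as we read Algorithm 1, using the parameter values above (numBatchesMP = 10, numBatchesBF16 = 1,000, emaThreshold = 4%). The functions train_batch and set_mode and the EMA decay constant are illustrative stand-ins, not the paper's actual interfaces.

```python
def dynamic_precision_training(batches, train_batch, set_mode,
                               num_batches_mp=10, num_batches_bf16=1000,
                               ema_threshold=0.04, ema_decay=0.9):
    """Sketch of Algorithm 1: alternate MP and BF16 phases, staying in
    BF16 while the EMA of the training loss keeps dropping by more than
    ema_threshold, and falling back to MP otherwise. set_mode() stands
    in for the inter-process hook that flips the emulation tool's mode;
    train_batch() runs one batch and returns its loss."""
    it, ema = iter(batches), None

    def run_phase(mode, num_batches):
        nonlocal ema
        set_mode(mode)      # 'MP' or 'BF16'; WU and BN stay FP32 either way
        for _ in range(num_batches):
            loss = train_batch(next(it))
            ema = loss if ema is None else ema_decay * ema + (1 - ema_decay) * loss
        return ema

    try:
        prev = run_phase('MP', num_batches_mp)    # warm up in mixed precision
        mode = 'BF16'       # in the paper this first switch is also EMA-gated
        while True:
            n = num_batches_bf16 if mode == 'BF16' else num_batches_mp
            cur = run_phase(mode, n)
            improving = (prev - cur) / max(abs(prev), 1e-12) > ema_threshold
            mode = 'BF16' if improving else 'MP'  # switch based on the EMA drop
            prev = cur
    except StopIteration:
        pass                                      # dataset exhausted
```

Setting num_batches_bf16 much larger than num_batches_mp is what keeps the fraction of half-precision FMAs above 94% while still allowing recovery phases in MP.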
To train it we use a batch size of 64 and a base learning rate of 0.045, which is updated every 427 steps (0.11 epochs). The gamma, momentum, and weight decay are set to 0.96, 0.9, and 0.0002, respectively. The training process is executed for 16 epochs. Finally we use ResNet-50. It is a network that delivers good accuracy and avoids the vanishing gradients issue by using residual blocks and the MSRA initializer. We train it using a multi-step approach. The batch size is 64 and the base learning rate is 0.05, which is updated every 30 epochs. The gamma hyperparameter, momentum value, and weight decay are set to 0.1, 0.9, and 0.0001, respectively. The training process runs for a total of 32 epochs. Figure 3 and Table 1 show results from our evaluation campaign. The x-axis of each of the three plots in Figure 3 represents the epochs of the training process while the y-axis represents the accuracy reached by the model over the validation set. Table 1 shows the test accuracy we reach for the three network models when using the FP32 and MP baselines and our two contributions: BF16 and Dynamic. The AlexNet model, due to its structure, shows a good response when lower-precision numerical data types are used. As can be seen in Figure 3a, all techniques converge, although the BF16 approach shows the worst accuracy when compared to the Dynamic or the MP techniques. Table 1 shows that FP32, MP, Dynamic, and BF16 reach top-5 accuracies of 84.50%, 84.43%, 84.02%, and 82.56% for AlexNet after 32 epochs. Importantly, Dynamic reaches the same accuracy as FP32 and MP while using the BF16 approach for 94.60% of the FMAs. In contrast, the BF16 static technique does 99.93% of the FMAs in full BF16 precision (0.07% are in WU and BN layers), but the accuracy drops by almost 3% in top-1 and 2% in top-5. This drop in accuracy happens just by doing an additional 5% of BF16 FMAs. This gives us some room to improve the Dynamic approach by reducing the percentage of BF16 FMAs with the objective of increasing the accuracy of the model. Figure 3b shows the validation accuracy during 16 epochs for the Inception V2 model. It shows fluctuations in the accuracy evaluation during training due to its structure and hyperparameter tuning. Dynamic responds in a robust way to these changes, which highlights its general applicability. Table 1 shows that FP32, MP, Dynamic, and BF16 reach top-5 accuracies of 93.36%, 92.67%, 92.02%, and 92.05% for Inception V2 after 16 epochs. Finally, the evaluation on ResNet-50 demonstrates that the Dynamic approach is effective when applied to deeper CNNs. In this case, the precision of the model reaches state-of-the-art levels while using half-precision for 96.4% of the FMA instructions. Figure 3c and Table 1 display the exact accuracy numbers we get from our evaluation after 32 epochs. In this experiment the top-1 accuracy drops just 1.2% between the BF16 and Dynamic approaches; however, the Dynamic technique could be further improved by relaxing the fraction of BF16 FMAs executed to gain more accuracy. We provide a sensitivity analysis for the parameters employed in Algorithm 1. The objective is to show that for a range of reasonable parameters the algorithm behaves as expected. To do this analysis we set one of the parameters to the currently used value (numBatchesMP to 10) to have a manageable number of combinations. We then test all the possible combinations using numBatchesBF16 = {500, 1000, 2000} and emaThreshold = {0.02, 0.04, 0.08}, that is, a total of 9 different combinations.
As stated in Section 5.3, during our evaluation we used the configuration {numBatchesMP, numBatchesBF16, emaThreshold} = {10, 1000, 0.04} for all the evaluated networks. Figure 4 shows, for a number of ResNet-50 epochs, the accuracy obtained for each of the 9 tested configurations as part of the sensitivity analysis. The naming convention for these configurations is Dyn-<emaThreshold> <numBatchesBF16>. In addition, we include accuracy for BF16, MP, and FP32 executions. As shown in the figure, the accuracies obtained at each epoch are always above that of the BF16 technique. For early epochs (i.e., 2 and 4) the dynamic configurations remain between BF16 and FP32 accuracy, or even slightly above FP32, due to initial noise. As training advances, all dynamic techniques behave similarly and present accuracies that are above BF16 and similar to those obtained with MP and FP32, as we would expect. The most important parameter is the emaThreshold, as it decides when a precision change occurs. As long as this parameter is reasonably set to detect training loss improvement or degradation, the algorithm is bound to behave as expected. Prior work indicates that dynamic fixed-point is effective to train deep neural networks with low-precision multipliers. This approach obtains state-of-the-art results by uniformly applying the dynamic fixed-point format with different scaling factors, which are driven by the overflow rate displayed by the fixed-point numbers. Our proposals target deeper neural networks than this approach and do not uniformly apply the same format to all network parameters. Instead, we differentiate computations requiring FP32 during the whole training, like weight updates, from the ones that are well-suited for dynamic data representation schemes. Previous approaches show the benefits of applying stochastic rounding to 16-bit fixed-point multiply and add operators. This previous work relies on FPGA emulation to show the benefits of stochastic rounding when applied to a custom fully connected neural network dealing with the MNIST dataset. The authors also consider a CNN similar to LeNet-5 to enlarge their experimental campaign. Other previous approaches propose a training process for DNNs using 8-bit floating-point numbers. They rely on a combination of 8-bit and 16-bit arithmetic, additionally using stochastic rounding, to obtain state-of-the-art results. The neural networks used in these previous approaches are much simpler than the ones we consider in this paper, which do not allow 8-bit arithmetic. The BF16 numerical format has been applied to specific-purpose hardware targeting deep neural networks. This specific hardware uses an approach very similar to the mixed-precision techniques described above and, consequently, our Dynamic approach can be applied on top of it to reduce computing costs. This paper analyzes the instruction breakdown of workloads focused on deep neural network training that rely on mixed-precision training. We show that mixed-precision FMAs constitute around 60% of these workloads and propose two approaches based on half-precision FMAs to accelerate the training process without hurting accuracy. The first approach uses BF16 FMAs for most of the training workload, except routines involved in weight updates or batch normalization layers. This approach uses BF16 for more than 99% of the FMAs, which has a very strong potential for performance improvement, while reaching slightly lower accuracy than the state-of-the-art.
We propose a second approach that dynamically switches between different data representation formats. This dynamic approach uses BF16 for around 96% of the FMAs while reaching the same precision levels as the standard single-precision and mixed-precision approaches. Our two proposals are evaluated considering three state-of-the-art deep neural networks and a binary analysis tool that applies the required precision for each instruction. To the best of our knowledge, this is the first paper that demonstrates that half-precision can be used extensively, on ≥94% of all FMAs, during the training of very deep models without the need for mixed-precision arithmetic.
Dynamic precision technique to train deep neural networks
921
scitldr
We introduce the “inverse square root linear unit” (ISRLU) to speed up learning in deep neural networks. ISRLU has better performance than ELU but has many of the same benefits. ISRLU and ELU have similar curves and characteristics. Both have negative values, allowing them to push mean unit activation closer to zero, and bring the normal gradient closer to the unit natural gradient, ensuring a noise-robust deactivation state, lessening the overfitting risk. The significant performance advantage of ISRLU on traditional CPUs also carries over to more efficient HW implementations in HW/SW codesign for CNNs/RNNs. In experiments with TensorFlow, ISRLU leads to faster learning and better generalization than ReLU on CNNs. This work also suggests a computationally efficient variant called the “inverse square root unit” (ISRU) which can be used for RNNs. Many RNNs use long short-term memory (LSTM) or gated recurrent units (GRU), which are implemented with tanh and sigmoid activation functions. ISRU has less computational complexity but still has a similar curve to tanh and sigmoid. Two popular activation functions for neural networks are the rectified linear unit (ReLU) BID6 and the exponential linear unit (ELU) BID5. The ReLU activation function is the identity for positive arguments and zero otherwise. The ELU activation function is the identity for positive arguments and has an exponential asymptotic approach to -1 for negative values. From previous analysis of Fisher optimal learning, i.e., the natural gradient BID1 BID5, we can reduce the undesired bias shift effect without the natural gradient, either by centering the activation of incoming units at zero or by using activation functions with negative values. We introduce the inverse square root linear unit (ISRLU), an activation function like ELU, that has smoothly saturating negative values for negative arguments, and the identity for positive arguments. In addition this activation function can be more efficiently implemented than ELU in a variety of software or purpose-built hardware. The inverse square root linear unit (ISRLU) with hyperparameter α is f(x) = x for x ≥ 0 and f(x) = x·(1/√(1 + αx²)) for x < 0. The ISRLU hyperparameter α controls the value to which an ISRLU saturates for negative inputs (see FIG0). ISRLUs and ELUs have very similar curves so at a high level one would expect to see the same general characteristics in most cases. ISRLUs have smooth and continuous first and second derivatives. ELUs are only continuous in the first derivative (see FIG0). In contrast, ReLU is non-differentiable at zero. Since ISRLUs and ELUs share most of the same characteristics we use the same weight initialization guidelines as are used for ELUs (BID5). The primary advantage of ISRLU is its reduced computational complexity compared to ELU. Inverse square roots are faster to calculate than exponentials. When calculating ISRLU for negative inputs, one first calculates 1/√(1 + αx²). Multiplying this function by x provides the value for the forward calculation. Multiplying this function by itself twice (i.e., cubing it) provides the value for back-propagation. With α = 1, ISRLU saturation approaches -1. With α = 3, the negative saturation is reduced, so a smaller portion of the back-propagated error signal will pass to the next layer. This allows the network to output sparse activations while preserving its ability to reactivate dead neurons. Note that under variations of the α parameter, the ISRLU curve and its derivative remain smooth and continuous.
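A minimal NumPy sketch of the forward pass and derivative as defined above; note how the negative-branch derivative is just the cube of the inverse-square-root factor already computed in the forward pass, so it can be reused during back-propagation.

```python
import numpy as np

def isrlu(x, alpha=1.0):
    """ISRLU forward pass: identity for x >= 0,
    x / sqrt(1 + alpha * x^2) for x < 0 (saturating near -1/sqrt(alpha))."""
    inv = 1.0 / np.sqrt(1.0 + alpha * x * x)
    return np.where(x < 0, x * inv, x)

def isrlu_grad(x, alpha=1.0):
    """ISRLU derivative: 1 for x >= 0, and for x < 0 the cube of the
    same inverse-square-root factor used in the forward pass."""
    inv = 1.0 / np.sqrt(1.0 + alpha * x * x)
    return np.where(x < 0, inv ** 3, 1.0)
```

For example, isrlu(np.array([-10.0]), alpha=1.0) gives about -0.995, close to the -1 saturation, while positive inputs pass through unchanged.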
Future work will establish what deeper saturation (α < 1) is appropriate when applying ISRLU to self-normalizing neural networks BID11. In the same manner as with parametric ReLUs (PReLUs), only one additional hyperparameter is required, and methods can be used to directly learn its value during back-propagation BID8. Similarly, ISRLU's α can be learned during the training phase along with the weights and biases. Indeed, for PReLUs, BID8 have empirically shown that learning the slope parameter "a" gives better performance than manually setting it to a pre-defined value. 3 ACTIVATION FUNCTION PERFORMANCE. BID14 showed that ELU was faster than the combination of ReLU and Batch Normalization for deep neural network (DNN) ResNet architectures. On CIFAR-10 and CIFAR-100 they showed that ELU not only speeds up learning but also improves the accuracy as the depth of the convolutional neural network (CNN) increases. More than learning rate needs to be considered when evaluating the overall performance of CNNs. The amount of time and computational resources required to perform the convolutions and activation functions combined should be considered. The trend in CNNs is that less time is being spent calculating convolutions; we see three factors behind this. First, small convolution filters such as 5x5 or 3x3 are the basis of many architectures. Second, architectures such as Inception-v3 and Inception-v4 now decompose 2D filters such as a 3x3 into a 3x1 filter and a 1x3 filter BID15. Third, more efficient calculations of convolution that rely on techniques such as Winograd's minimal filtering algorithm BID12 BID16 are being used for 3x3 and smaller filters, as are FFTs to reduce calculation time for 5x5 or larger filters. All of these techniques reduce the amount of calculation for each element in the convolution output. TAB0 shows "cycles per output element" for an Intel Xeon Platinum 8160 (Skylake). Due to all of these reductions in convolution computational complexity, activation function performance is now a greater part of overall learning performance. Another characteristic that is changing with the use of smaller filters is the decrease in compute intensity BID2, which raises the importance of memory system performance for CNNs. The compute intensity of an algorithm is the ratio of the number of operations to the number of words accessed. For a given algorithm it is straightforward to calculate the upper bound of the computation rate that can be supported on a given memory bandwidth. The main advantage of ISRLU over ELU is that it is based on the inverse square root, which has been faster to evaluate than the exponential for many generations of systems. In the past, whenever it has not been faster, potential for optimizing inverse square root implementations has been found. It is instructive to compare the current CPU performance of the inverse square root intrinsic to that of exponentials and tanh. Intel x86 CPUs with SIMD instructions have vector intrinsic functions to accelerate performance. Intel publishes CPE (Clocks per Element) for various vector functions on their "Vector Mathematics (VM) Performance and Accuracy Data" website; see TAB1. For example, on a 3x1 filter using ELU in the negative region, approximately the same CPE is required to evaluate the convolution as is required for the exponential (cf. TAB0). Improvements in activation function performance will impact overall time spent in each learning step.
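As a sketch of the PReLU-style option mentioned at the start of this section, α can be exposed as a trainable parameter and learned by back-propagation. This is an illustrative PyTorch module under our own assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LearnableISRLU(nn.Module):
    """ISRLU with a single trainable alpha, learned alongside the
    weights and biases in the manner of PReLU's slope parameter."""
    def __init__(self, alpha_init: float = 1.0):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(float(alpha_init)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        alpha = torch.clamp(self.alpha, min=1e-6)  # keep saturation well-defined
        inv = torch.rsqrt(1.0 + alpha * x * x)     # 1 / sqrt(1 + alpha x^2)
        return torch.where(x < 0, x * inv, x)
```

Autograd handles both the input gradient and the gradient with respect to α, so no custom backward pass is needed in this sketch.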
We measured the vector performance of AVX2 implementations of the various activation functions. The dataset used was 50% negative and 50% positive. Results are shown in TAB2. They show that ISRLU (α = 1.0) is 2.6× faster than ELU. The fast approximation of ISRLU is within 1% of the evaluation speed of ReLU while still retaining all of the desired learning-curve properties mentioned in this paper. This fast approximation for ISRLU on this processor has only a 3×10⁻⁴ maximum relative error (∼11.6 accurate bits). One Newton-Raphson iteration doubles that to ∼23.4 accurate bits out of the 24 bits of mantissa, and two iterations achieve full precision. We plan to evaluate whether the fast approximation has learning behavior similar to the full-precision ISRLU. It is instructive to look at a practical trick for the computation of the inverse square root, as it may serve as inspiration for those implementing ISRLU in hardware. Software implementations on CPUs can take advantage of floating-point formats for faster evaluation of the inverse square root. John Carmack and Terje Mathisen are often associated with implementing the fast inverse square root in 2002 BID13. In 1986, one of the authors of this paper originally invented this method, which was called "The K Method," to implement vector square root for the production FPS T Series Hypercube Supercomputer BID7. William Kahan and K.C. Ng at Berkeley also independently discovered this around 1986. Carmack & Mathisen only used one iteration of the Newton method after their fast approximation. One iteration had an error of approximately 0.175%, which was suitable for their graphics applications. Since various piecewise functions have been used to approximate activation functions for CNNs and RNNs, part of our future research will look into whether fast approximations to ISRLUs are suitable for DNNs. Another avenue to look at for hardware implementations of the inverse square root is table-lookup hardware. Our expectation is that an efficient hardware approximation for the inverse square root should take about the same execution time as a fused multiply-add (FMA). We used TensorFlow BID0 to train a CNN on the (Lecun) MNIST dataset. We tested the MNIST gray images in 10 classes, 60k train and 10k test. The first CNN architecture (see TAB3) in our experiments used 28x28 input, a convolutional layer with 6x6 with 6 feature maps, a convolutional layer with 5x5 with 12 feature maps, a convolutional layer with 4x4 with 24 feature maps, a fully connected layer of 1176 hidden units, and a softmax output layer with 10 units. Only a full-precision ISRLU was used in these initial tests due to time constraints. Convolutional neural networks with ISRLUs (α = 1.0, α = 3.0), ELUs (α = 1.0), and ReLUs were trained on the MNIST digit classification dataset while each hidden unit's activation was tracked. Each network was trained for 17 epochs using the Adam optimizer with learning rate 0.003 exponentially decreasing to 0.0001 and mini-batches of size 100. The weights were initialized to a truncated normal with standard deviation 0.1. The training error of ISRLU networks decreases much more rapidly than for the other networks. We also calculated the final cross-entropy loss for each test.
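Returning to the bit-level trick described above, here is a NumPy rendition of the classic fast inverse square root (the "K Method" / Carmack-Mathisen approach): reinterpret the float's bits, form an initial guess with a magic constant, then refine with Newton-Raphson. This is the well-known textbook variant, not necessarily the exact approximation benchmarked in TAB2.

```python
import numpy as np

def fast_inv_sqrt(x, newton_iters=1):
    """Fast approximate 1/sqrt(x) for positive FP32 inputs. The raw
    bit-pattern guess has a few percent error; one Newton-Raphson
    iteration brings it to roughly 0.175% maximum error, two to near
    full FP32 precision."""
    x = np.ascontiguousarray(x, dtype=np.float32)
    i = x.view(np.uint32)
    i = np.uint32(0x5F3759DF) - (i >> np.uint32(1))   # magic-constant guess
    y = i.view(np.float32)
    for _ in range(newton_iters):
        y = y * (np.float32(1.5) - np.float32(0.5) * x * y * y)  # NR step
    return y
```

A fast ISRLU negative branch would then be x * fast_inv_sqrt(1 + alpha * x * x), with the accuracy/speed trade-off controlled by the number of Newton-Raphson iterations.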
The second CNN architecture (see TAB4) in our experiments used 28x28 input, a convolutional layer with 3x3 with 64 feature maps, a convolutional layer with 3x3 with 64 feature maps, 2x2 max-pooling, dropout, a convolutional layer with 3x3 with 64 feature maps, a convolutional layer with 3x3 with 64 feature maps, 2x2 max-pooling, dropout, a fully connected (FC) layer of 512 hidden units, and a softmax output layer with 10 units. Full-precision ISRLU was used. Convolutional neural networks with ISRLUs (α = 1.0, α = 3.0) and ELUs (α = 1.0) were trained on the MNIST digit classification dataset while each hidden unit's activation was tracked. The network was trained for 20 epochs using the Adam optimizer with learning rate 0.003 exponentially decreasing to 0.0001 and mini-batches of size 100. The weights were initialized to a truncated normal with standard deviation 0.1. We did not expect significant differences in accuracy between ISRLU and ELU in this test of shallow networks due to the similar nature of the curves. The cross-entropy loss was reasonable, at between 2 and 3.2 for all activation functions. Future testing will be done on deeper networks, where we expect larger advantages similar to ELU BID5 BID14. The work with ISRLU in this paper suggests that the inverse square root unit (ISRU) may be useful for a variety of neural networks. ISRUs are defined as f(x) = x·(1/√(1 + αx²)) over the whole domain. In RNNs that use LSTM BID9 and GRU BID4, the most common activation functions are sigmoid and tanh. We assert that ISRUs can be calculated more efficiently than tanh, and more efficiently than sigmoid when properly shifted and scaled. As shown above in TAB1, the inverse square root is 3x to 6x faster than tanh (depending on x86 architecture). ISRUs will be an area of our future research. Activation function performance is becoming more important overall in convolutional neural networks (CNNs) because of the ongoing reductions in the computational complexity of the convolutions used in CNNs. We have introduced a new activation function, the inverse square root linear unit (ISRLU), for faster and precise learning in deep convolutional neural networks. ISRLUs have similar activation curves to ELUs, including the negative values. This decreases the forward-propagated variation and brings the mean activations to zero. Mean activations close to zero decrease the bias shift for units in the next layer, which speeds up learning by bringing the natural gradient closer to the unit natural gradient. Future work may prove the effectiveness of applying ISRLUs and the related ISRUs to other network architectures, such as recurrent neural networks, and to other tasks, such as object detection. ISRLUs have lower computational complexity than ELUs. Even greater savings on computation can be realized with custom hardware implementations of ISRLUs. We expect ISRLU activations to increase the training efficiency of convolutional networks.
We introduce the ISRLU activation function which is continuously differentiable and faster than ELU. The related ISRU replaces tanh & sigmoid.
922
scitldr
A generally intelligent learner should generalize to more complex tasks than it has previously encountered, but the two common paradigms in machine learning -- either training a separate learner per task or training a single learner for all tasks -- both have difficulty with such generalization because they do not leverage the compositional structure of the task distribution. This paper introduces the compositional problem graph as a broadly applicable formalism to relate tasks of different complexity in terms of problems with shared subproblems. We propose the compositional generalization problem for measuring how readily old knowledge can be reused and hence built upon. As a first step for tackling compositional generalization, we introduce the compositional recursive learner, a domain-general framework for learning algorithmic procedures for composing representation transformations, producing a learner that reasons about what computation to execute by making analogies to previously seen problems. We show on a symbolic and a high-dimensional domain that our compositional approach can generalize to more complex problems than the learner has previously encountered, whereas baselines that are not explicitly compositional do not. This paper seeks to tackle the question of how to build machines that leverage prior experience to solve more complex problems than they have previously encountered. How does a learner represent prior experience? How does a learner apply what it has learned to solve new problems? Motivated by these questions, this paper aims to formalize the idea of, as well as to develop an understanding of the machinery for, compositional generalization in problems that exhibit compositional structure. The solutions for such problems can be found by composing in sequence a small set of reusable partial solutions, each of which tackles a subproblem of a larger problem. The central contributions of this paper are to frame the shared structure across multiple tasks in terms of a compositional problem graph, propose compositional generalization as an evaluation scheme to test the degree to which a learner can apply previously learned knowledge to solve new problems, and introduce the compositional recursive learner, a domain-general framework for sequentially composing representation transformations that each solve a subproblem of a larger problem. The key to our approach is recasting the problem of generalization as a problem of learning algorithmic procedures over representation transformations. A solution to a (sub)problem is a transformation between its input and output representations, and a solution to a larger problem composes these subsolutions together. Therefore, representing and leveraging prior problem-solving experience amounts to learning a set of reusable primitive transformations and their means of composition that reflect the structural properties of the problem distribution. This paper introduces the compositional recursive learner (CRL), a framework for learning both these transformations and their composition together with sparse supervision, taking a step beyond other approaches that have assumed either pre-specified transformations or composition rules (Sec. 5). CRL learns a modular recursive program that iteratively re-represents the input representation into more familiar representations it knows how to compute with.
In this framework, a transformation between representations is encapsulated into a computational module, and the overall program is the sequential combination of the inputs and outputs of these modules, whose application is decided by a controller. What sort of training scheme would encourage the spontaneous specialization of the modules around the compositional structure of the problem distribution? First, exposing the learner to a diverse distribution of compositional problems helps it pattern-match across problems to distill out common functionality that it can capture in its modules for future use. Second, enforcing that each module have only a local view of the global problem encourages task-agnostic functionality that prevents the learner from overfitting to the empirical training distribution; two ways to do this are to constrain the model class of the modules and to hide the task specification from the modules. Third, training the learner with a curriculum encourages the learner to build off old solutions to solve new problems by re-representing the new problem into one it knows how to solve, rather than learning from scratch. How should the learner learn to use these modules to exploit the compositional structure of the problem distribution? We can frame the decision of which computation to execute as a reinforcement learning problem in the following manner. The application of a sequence of modules can be likened to the execution trace of the program that CRL automatically constructs, where a computation is the application of a module to the output of a previous computation. The automatic construction of the program can be formulated as the solution to a sequential decision-making problem in a meta-level Markov decision process (MDP), where the state space is the learner's internal states of computation and the action space is the set of modules. Framing the construction of a program as a reinforcement learning problem allows us to use techniques in deep reinforcement learning to implement loops and recursion, as well as to decide on which part of the current state of computation to apply a module, re-using sub-solutions to solve a larger problem. Our experiments on solving multilingual arithmetic problems and recognizing spatially transformed MNIST digits BID26 show that the above proposed training scheme prescribes a type of reformulation: re-representing a new problem in terms of other problems by implicitly making an analogy between their solutions. We also show that our meta-reasoning approach for deciding what modules to execute achieves better generalization to more complex problems than monolithic learners that are not explicitly compositional. "Solving a problem simply means representing it so as to make the solution transparent" (BID60). Humans navigate foreign cities and understand novel conversations despite only observing a tiny fraction of the true distribution of the world. Perhaps they can extrapolate in this way because the world contains compositional structure, such that solving a novel problem is possible by composing previously learned partial solutions in a novel way to fit the context. With this perspective, we propose the concept of compositional generalization. The key assumption of compositional generalization is that harder problems are composed of easier problems. The problems from the training and test sets share the same primitive subproblems, but differ in the manner and complexity with which these subproblems are combined.
Therefore, problems in the test set can be solved by combining solutions learned from the training set in novel ways. Definition. Let a problem P be a pair (X_in, X_out), where X_in and X_out are random variables that respectively correspond to the input and output representations of the problem. Let the distribution of X_in be r_in and the distribution of X_out be r_out. To solve a particular problem P = p is to transform X_in = x_in into X_out = x_out. A composite problem p_a = p_b ∘ p_c is one that can be solved by first solving p_c and then solving p_b with the output of p_c as input. p_b and p_c are subproblems with respect to p_a. The space of compositional problems forms a compositional problem graph, whose nodes are the representation distributions r. A problem is described as a pair of nodes between which the learner must learn to construct an edge or a path to transform between the two representations. Characteristics. First, there are many ways in which a problem can be solved. For example, translating an English expression to a Spanish one can be solved directly by learning such a transformation, or a learner could make an analogy with other problems by first translating English to French, and then French to Spanish as intermediate subproblems. Second, sometimes a useful (although not the only) way to solve a problem is indicated by the recursive structure of the problem itself: solving the arithmetic expression 3 + 4 × 7 modulo 10 can be decomposed by first solving the subproblem 4 × 7 = 8 and then 3 + 8 = 1. Third, because a problem is just an (input, output) pair, standard problems in machine learning fit into this broadly applicable framework. For example, for a supervised classification problem, the input representation can be an image and the output representation a label, with intermediate subproblems transforming between intermediate feature representations. Broad Applicability. Problems in supervised, unsupervised, and reinforcement learning can all be viewed under the framework of transformations between representations. What we gain from the compositional problem graph perspective is a methodological way to relate together different problems of various forms and complexity, which is especially useful in a lifelong learning setting: the knowledge required to solve one problem is composed of the knowledge required to solve subproblems seen in the past in the context of different problems. For example, we can view latent-variable reinforcement learning architectures such as (BID36) as simultaneously solving an image reconstruction problem and an action prediction problem, both of which share the same subproblem of transforming a visual observation into a latent representation. Lifelong learning, then, can be formulated as not only modifying the connections between nodes in the compositional problem graph but also continuing to make more connections between nodes, gradually expanding the frontier of nodes explored. Sec. 4 describes how CRL takes advantage of this compositional formulation in a multi-task zero-shot generalization setup to solve new problems by re-using computations learned from solving past problems. Evaluation. To evaluate a learner's capacity for compositional generalization, we introduce two challenges. The first is to generalize to problems with different subproblem combinations from what the learner has seen. The second is to generalize to problems with longer subproblem combinations than the learner has seen.
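To make the Definition above concrete, here is a small Python sketch of problems as node pairs in the compositional problem graph and of the composition rule p_a = p_b ∘ p_c; the representation names are illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Problem:
    """A problem is a pair of nodes (input and output representation
    distributions) in the compositional problem graph."""
    r_in: str
    r_out: str

def compose(p_b: Problem, p_c: Problem) -> Problem:
    """p_a = p_b o p_c: solvable by first solving p_c, then solving p_b
    on p_c's output, so the intermediate representations must match."""
    assert p_b.r_in == p_c.r_out, "subproblems must chain through a shared node"
    return Problem(r_in=p_c.r_in, r_out=p_b.r_out)

# English -> Spanish solved via an intermediate French node:
en_to_fr = Problem("English", "French")
fr_to_es = Problem("French", "Spanish")
en_to_es = compose(fr_to_es, en_to_fr)   # Problem("English", "Spanish")
```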
Evaluating a learner's capability for compositional generalization is one way to measure how readily old knowledge can be reused and hence built upon. This paper departs from the popular representation-centric view of knowledge BID11 and instead adopts a computation-centric view of knowledge: our goal is to encapsulate useful functionality shared across tasks into specialized computational modules -- atomic function operators that perform transformations between representations. This section introduces the compositional recursive learner (CRL), a framework for training modules to capture primitive subproblems and for composing together these modules as subproblem solutions to form a path between nodes of the compositional problem graph. The CRL framework consists of a controller π, a set of modules m ∈ M, and an evaluator E. Training CRL on a diverse compositional problem distribution produces a modular recursive program that is trained to transform the input X_in into its output X_out, the corresponding samples of which are drawn from pairs of nodes in the compositional problem graph. In this program, the controller looks at the current state x_i of the program and chooses a module m to apply to the state. The evaluator executes the module on that state to produce the next state x_{i+1} of the program. X_in is the initial state of the program, X_out is the last, and the intermediate states X_i of the execution trace correspond to the other representations produced and consumed by the modules. The controller can choose to re-use modules across different program executions to solve different problems, making it straightforward to re-use computation learned from solving other problems to solve the current one. The controller can also choose to re-use modules several times within the same program execution, which produces recursive behavior. The sequential decision problem that the controller solves can be formalized as a meta-level Markov decision process (meta-MDP), whose state space corresponds to the intermediate states of computation X, whose action space corresponds to the modules M, and whose transition model corresponds to the evaluator E. The symbiotic relationship among these components is shown in FIG2. In the bounded-horizon version of CRL (Sec. 4.2), the meta-MDP has a finite horizon whose length is determined by the complexity of the current problem. In the infinite-horizon version of CRL (Sec. 4.1), the program itself determines when to halt, when the controller selects the HALT signal. When the program halts, in both versions the current state of computation is produced as the output x̂_out, and CRL receives a terminal reward that reflects how well x̂_out matches the desired output x_out. The infinite-horizon CRL also incurs a cost for every computation it executes, to encourage it to customize its complexity to the problem. Note the following key characteristics of CRL. First, unlike standard reinforcement learning setups, the state space and action space can vary in dimensionality across and within episodes because CRL trains on problems of different complexity, reducing more complex problems to simpler ones (Sec. 4.1). Second, because the meta-MDP is internal to CRL, the controller shapes the meta-MDP by choosing which modules get trained, and the meta-MDP in turn shapes the controller through its non-stationary state distribution, action distribution, and transition function.
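The controller-evaluator interaction above can be summarized in a few lines. The following Python sketch of one infinite-horizon program execution uses illustrative names (controller.sample, the module list, the step cap) rather than the paper's actual code.

```python
def crl_execute(x_in, controller, modules, evaluator, max_steps=20):
    """One CRL program execution: the controller inspects the current
    state of computation and picks a module (or HALT); the evaluator
    applies the module to produce the next state. Returns the final
    state (the predicted output) and the execution trace."""
    HALT = len(modules)            # reserve the last action as the halt signal
    state, trace = x_in, [x_in]
    for _ in range(max_steps):     # safety cap on recursion depth
        action = controller.sample(state)          # policy over modules + HALT
        if action == HALT:
            break
        state = evaluator(modules[action], state)  # meta-MDP transition
        trace.append(state)        # intermediate representation X_i
    return state, trace
```

The terminal reward would be derived from comparing the returned state to the desired output, with a small per-step cost added in the infinite-horizon setting.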
Thus CRL simultaneously designs and solves reinforcement learning problems "in its own mind," whose dynamics depend just as much on the intrinsic complexity of the problem as on the current problem-solving capabilities of CRL. The solution that we want CRL to discover lies between two extremes, both of which have their own drawbacks. One extreme is where CRL learns a module specialized for every pair of nodes in the compositional problem graph, and the other is where CRL learns only one module for all pairs of nodes. Both extremes yield a horizon-one meta-MDP and are undesirable for compositional generalization: the former does not re-use past knowledge and the latter cannot flexibly continue to learn without suffering from negative transfer. What is the best solution that CRL could discover? For a given compositional problem graph, an optimal solution would be to recover the original compositional problem graph, such that the modules exactly capture the subproblems and the controller composes these modules to reflect how the subproblems were originally generated. By learning both the parameters of the modules and the controller that composes them, during training CRL would construct its own internal representation of the problem graph, where the functionality of the modules produces the nodes of the graph. How can we encourage CRL's internal graph to reflect the original compositional problem graph? We want to encourage the modules to capture the most primitive subproblems, such that they can be composed as atomic computations for other problems. To do this, we need to enforce that each module only has a local view of the global problem. If tasks are distinguished from each other based on the input (see Sec. 4.2), we can use domain knowledge to restrict the representation vocabulary and the function class of the modules. If we have access to a task specification (e.g. goal or task id) in addition to the input, we can additionally give only the controller access to the task specification while hiding it from the modules. This forces the modules to be task-agnostic, which encourages them to learn useful functionality that generalizes across problems. Because the space of subproblem compositions is combinatorially large, we use a curriculum to encourage solutions for the simpler subproblems to converge somewhat before introducing more complex problems, which CRL can learn to solve by composing together the modules that had been trained on simpler problems. Lastly, to encourage the controller to generalize to new node combinations it has not seen, we train on a diverse distribution of compositional problems, such that the controller does not overfit to any one problem. This encourages the controller to make analogies between problems during training by re-using partial solutions learned while solving other problems. [Figure caption (fragment): an RNN baseline, even with ten times more data, does not generalize to 10-length multilingual arithmetic expressions. Pretraining the RNN on domain-specific auxiliary tasks does not help the 10-length case, highlighting a limitation of using monolithic learners for compositional problems.]
[Figure caption, continued: By comparing CRL with a version trained without a curriculum ("No Curr": blue), we see the benefit of slowly growing the complexity of problems throughout training, although this benefit does not transfer to the RNN. The vertical black dashed line indicates at which point all the training data has been added when CRL is trained with a curriculum (red). The initial consistent rise of the red training curve before this point shows that CRL exhibits forward transfer BID30 to expressions of longer length. Generalization becomes apparent only about a million iterations after all the training data has been added. (b, c) only show accuracy on the expressions with the maximum length of those added so far to the curriculum. "1e4" and "1e5" correspond to the order of magnitude of the number of samples in the dataset, of which 70% are used for training. 10, 50, and 90 percentiles are shown over 6 runs.] The main purpose of our experiments is to test the hypothesis that explicitly decomposing a learner around the structure of a compositional problem distribution yields significant generalization benefit over the standard paradigm of training a single monolithic architecture on the same distribution of problems. To evaluate compositional generalization, we select disjoint subsets of node pairs for training and evaluating the learner. Evaluating on problems distinct from those in training tests the learner's ability to apply what it has learned to new problems. To demonstrate the broad applicability of the compositional graph, we consider the structured symbolic domain of multilingual arithmetic and the underconstrained, high-dimensional domain of transformed-MNIST classification. We find that composing representation transformations with CRL achieves significantly better generalization when compared to generic monolithic learners, especially when the learner needs to generalize to problems with longer subproblem combinations than those seen during training. In our experiments, the controller and modules begin as randomly initialized neural networks. The loss is backpropagated through the modules, which are trained with Adam. The controller receives a sparse reward derived from the loss at the end of the computation, and a small cost for each computational step. The model is trained with proximal policy optimization BID58. This experiment evaluates the infinite-horizon CRL in a multi-objective, variable-length-input, symbolic-reasoning multi-task setting. A task is to simplify an arithmetic expression expressed in a source language, encoded as variable-length sequences of one-hot tokens, and produce the answer modulo 10 in a given target language. To evaluate compositional generalization, we test whether, after having trained on 46,200 examples of 2-, 3-, 4-, and 5-length expressions (2.76 × 10⁻⁴ of the training distribution) involving 20 of the 5 × 5 = 25 pairs of five languages, the learner can generalize to 5-length and 10-length expressions involving the other five held-out language pairs (problem space: 4.92 × 10¹⁵ problems). To handle the multiple target languages, the CRL controller receives a one-hot token for the target language at every computational step, in addition to the arithmetic expression. The CRL modules consist of two types of feedforward networks: reducers and translators, which do not know the target language and so can only make local progress on the global problem.
Reducers transform a consecutive window of three tokens into one token, and translators transform all tokens in a sequence by the same transformation. The CRL controller also selects where in the arithmetic expression to apply a reducer. [Figure 4: Left: For multilingual arithmetic, blue denotes the language pairs for training and red denotes the language pairs held out for evaluation in FIG3. Center: For transformed-MNIST classification, blue denotes the length-2 transformation combinations that produced the input for training, red denotes the length-2 transformation combinations held out for evaluation. Not shown are the more complex length-3 transformation combinations (scale then rotate then translate) we also tested on. Right: For transformed-MNIST classification, each learner performs better than the others in a different metric: the CNN performs best on the training subproblem combinations, the STN on different subproblem combinations of the same length as training, and CRL on longer subproblem combinations than training. While CRL performs comparably with the others in the former two metrics, CRL's ∼40% improvement for more complex image transformations is significant.] We trained by gradually increasing the complexity of arithmetic expressions from length two to length five. Quantitative results in FIG3 show that CRL achieves significantly better compositional generalization than a recurrent neural network (RNN) baseline trained to directly map the expression to its answer, even when the RNN has been pretrained or receives 10x more data. Fig. 9 shows that CRL achieves about 60% accuracy when extrapolating to 100-term problems (problem space: 4.29 × 10¹⁴⁸ problems). The curriculum-based training scheme encourages CRL to design its own edges and paths to connect nodes in the compositional problem graph, solving harder problems with the solutions from simpler ones. It also encourages its internal representations to mirror the external representations it observes in the problem distribution, even though it has no direct supervision to do so. However, while this is often the case, qualitative results in FIG4 show that CRL also comes up with its own internal language -- hybrid representations that mix different external representations together -- to construct compositional solutions for novel problems. Rather than learn translators and reducers that are specific to a single input-output language pair, as we had expected, the modules, possibly due to their nonlinear nature, tended to learn operations specific to the output language only. This experiment evaluates the bounded-horizon CRL in a single-objective, latent-structured, high-dimensional multi-task setting. A task is to classify an MNIST digit, where the MNIST digit has been randomly translated (left, right, up, down), rotated (left, right), and scaled (small, big). Suppose CRL has knowledge of what untransformed MNIST digits look like; is it possible that CRL can learn to compose appropriate spatial affine transformations in sequence to convert the transformed MNIST digit into a "canonical" one, such that it can use a pre-trained classifier to classify it? To reformulate a scenario into one that is more familiar is characteristic of compositional generalization in humans: humans view an object at different angles yet understand it is the same object; they may have an accustomed route to work, but can adapt to a detour if the route is blocked.
To evaluate compositional generalization, we test whether, having trained on images produced by combinations of two spatial transformations, CRL can generalize to different length-2 combinations as well as length-3 combinations. A challenge in this domain is that the compositional structure is latent, rather than apparent in the input for the learner to exploit. CRL is initialized with four types of modules: a Spatial Transformer Network (STN) BID15 parametrized to only rotate, an STN that only scales, an STN that only translates, and an identity function. All modules are initialized to perform the identity transformation, such that symmetry breaking (and their eventual specialization) is due to the stochasticity of the controller. [Figure caption: CRL applied to length-5 to length-10 expressions. The input is 0 − 6 + 1 + 7 × 3 × 6 − 3 + 7 − 7 × 7 expressed in Pig Latin. The desired output is seis, which is the value of the expression, 6, expressed in Spanish. The purple modules are reducers and the red modules are translators. The input to a module is highlighted and the output of the module is boxed. The controller learns order of operations. Observe that reducer m9 learns to reduce to numerals and reducer m10 to English terms. The task-agnostic nature of the modules forces them to learn transformations that the controller would commonly reuse across problems. Even if the problem may not be compositionally structured, such as translating Pig Latin to Spanish, CRL learns to design a compositional solution (Pig Latin to Numerals to Spanish) from previous experience (Pig Latin to Numerals and Numerals to Spanish) in order to generalize: it first reduces the Pig Latin expression to a numerical evaluation, and then translates that to its Spanish representation using the translator m6. Note that all of this computation is happening internally to the learner, which computes on softmax distributions over the vocabulary; for visualization we show the token of the distribution with maximum probability.] [Figure caption: CRL transforms the MNIST digit into canonical position, and generalizes to different and longer compositions of generative transformations. m0 is constrained to output the sine and cosine of a rotation angle, m1 is constrained to output the scaling factor, and m2 through m13 are constrained to output spatial translations. Some modules like m2 and m6 learn to translate up, some like m3 and m10 learn to translate down, some like m7 learn to shift right, and some like m13 learn to shift left. Consider (d): the original generative transformations were "scale big" then "translate left," so the correct inversion should be "translate right" then "scale small." However, CRL chose to equivalently "scale small" and then "translate right." CRL also creatively uses m0 to scale, as in (e) and (f), even though its original parametrization of outputting sine and cosine is biased towards rotation.] Quantitative results in Fig. 4 show that CRL achieves significantly better compositional generalization than both the standard practice of fine-tuning the pretrained convolutional neural network classifier and training an affine-STN as a pre-processor to the classifier. Both baselines perform better than CRL on the training set, and the STN's inductive bias surprisingly also allows it to generalize to different length-2 combinations. However, both baselines achieve only less than one-third of CRL's generalization performance for length-3 combinations, which showcases the value of explicitly decomposing problems.
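A sketch of the bounded-horizon procedure in this domain, with illustrative names: the controller sequences the constrained STN modules to canonicalize the digit before a fixed pre-trained classifier scores it.

```python
def canonicalize_and_classify(image, controller, stn_modules, classifier,
                              horizon=3):
    """Bounded-horizon CRL on transformed MNIST (a sketch): the controller
    picks among constrained modules (rotate-only, scale-only,
    translate-only, and identity STNs) for a fixed number of steps, and
    the terminal reward is derived from the pre-trained classifier's loss."""
    x = image
    for _ in range(horizon):                 # horizon set by task complexity
        module = stn_modules[controller.sample(x)]
        x = module(x)                        # apply one affine transformation
    return classifier(x)                     # logits over the ten digit classes
```

Because the classifier is fixed, the modules are pushed to undo the latent generative transformations, even though the sequence they choose need not be the exact inverse of the one that produced the image.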
Note that in FIG5 the sequence of transformations CRL performs is not necessarily the reverse of the sequence that generated the original input, which shows that CRL has learned its own internal language for representing nodes in the problem graph. Several recent and contemporaneous works BID24 BID29 BID31 BID7 have tested whether neural networks exhibit systematic compositionality BID32 BID33 in parsing symbolic data. This paper draws inspiration from and builds upon research in several areas to propose an approach towards building a learner that exhibits compositional generalization. We hope this paper provides a point of unification among these areas through which further connections can be strengthened.

Transformations between representations: Our work introduces a learner that exhibits compositional generalization by bridging deep learning and reformulation, i.e. re-representing a problem to make it easier to solve BID12 BID59 BID2, by making analogies BID39 to previously encountered problems. Taking inspiration from meta-reasoning BID47 in humans BID27, CRL generalizes to new problems by composing representation transformations (analogous to subprograms in prior work), an approach for which recent and contemporaneous work BID52 BID0 provides evidence.

Meta-learning: Our modular perspective departs from recent work in meta-learning BID68 BID53, which assumes that the shared representations of monolithic architectures can be shaped by the diversity of tasks in the training distribution into good initializations for future learning BID38 BID43 BID4 BID34 BID25 BID64.

Graph-based architectures: Work on graph-based architectures has studied combinatorial generalization in the context of modeling physical systems BID72 BID9 BID50 BID70. Whereas these works focus on factorizing representations, we focus on factorizing the computations that operate on representations. Just as the motivation behind disentangled representations BID72 BID21 BID11 is to uncover the latent factors of variation, the motivation behind disentangled programs is to uncover the latent organization of a task. Compositional approaches (as opposed to memory-augmented BID65 BID16 BID23 BID4 or monolithic BID75 BID17 approaches for learning programs) to the challenge of discovering reusable primitive transformations and their means of composition generally fall into two categories. The first assumes pre-specified transformations and learns the structure (with supervision ranging from dense supervision on execution traces to sparse rewards) BID44 BID74 BID28 Džeroski et al. (2001) BID76 BID54. The second learns the transformations but pre-specifies the structure BID3 BID45 BID28. These approaches are respectively analogous to our hardcoded-functions and hardcoded-controller ablations in Fig. 7. The closest works to ours from a program induction perspective are neurosymbolic approaches BID69 for learning differentiable programs integrated into a high-level programming language. Our work complements theirs by casting the construction of a program as a reinforcement learning problem, and we believe that more tightly integrating CRL with types and combinators would be an exciting direction for future work.

Lifelong Learning: CRL draws inspiration from work BID53 BID56 BID57 on learners that learn to design their own primitives and subprograms for solving an increasingly large number of tasks.
The simultaneous optimization over the continuous function parameters and their discrete compositional structure in CRL is inspired by the interplay between abstract and concrete knowledge that is hypothesized to characterize cognitive development: abstract structural priors serve as a scaffolding within which concrete, domain-specific learning takes place BID62 BID42, but domain-specific learning about the continuous semantics of the world can also provide feedback to update the more discrete structural priors.

Hierarchy: Several works have investigated the conditions under which hierarchy is useful for humans BID61 BID48; our experiments show that the hierarchical structure of CRL is more useful than the flat structure of monolithic architectures for compositional generalization. Learning both the controller and the modules relates CRL to the hierarchical reinforcement learning literature BID8, where recent work BID6 BID71 BID35 attempts to learn both lower-level policies and a higher-level policy that invokes them.

Modularity: Our idea of selecting different weights at different steps of computation is related to the fast-weights literature BID55 BID5, but those works are motivated by learning context-dependent associative memory BID13 BID73 BID20 BID1 rather than by composing representation transformations, with the exception of BID51. CRL can be viewed as a recurrent mixture of experts BID14, where each expert is a module, similar to other recent and contemporaneous work BID46 BID19 that routes through a choice of layers of a fixed-depth architecture for multi-task learning. The closest work to ours from an implementation perspective is BID46. However, these works do not address the problem of generalizing to more complex tasks because they do not allow for variable-length compositions of the modules. BID40 focuses on a direction complementary to ours; whereas they focus on learning causal mechanisms for a single step, we focus on learning how to compose modules. We believe composing together causal mechanisms would be an exciting direction for future work.

This paper sought to tackle the question of how to build machines that leverage prior experience to solve more complex problems than they have seen. It makes three steps towards a solution. First, we formalized the compositional problem graph as a language for studying compositionally-structured problems of different complexity that can be applied to various problems in machine learning. Second, we introduced the compositional generalization evaluation scheme for measuring how readily old knowledge can be reused and hence built upon. Third, we presented the compositional recursive learner, a domain-general framework for learning a set of reusable primitive transformations and their means of composition that reflect the structural properties of the problem distribution. In doing so we leveraged tools from reinforcement learning to solve a program induction problem. There are several directions for improvement. One is to stabilize the simultaneous optimization between discrete composition and continuous parameters; currently this is tricky to tune. Others are to generate computation graphs beyond a linear chain of functions, and to infer the number of functions required for a family of problems. A major challenge would be to discover the subproblem decomposition without a curriculum and without domain-specific constraints on the model class of the modules.
It has been argued that the efficient use of cognitive resources in humans may also explain their ability to generalize, and this paper provides evidence that reasoning about which computation to execute, by making analogies to previously seen problems, achieves significantly higher compositional generalization than non-compositional monolithic learners. Encapsulating computational modules grounded in the subproblem structure may also pave a way towards improving the interpretability of neural networks, by allowing the modules to be unit-tested against the subproblems we desire them to capture. Because problems in supervised, unsupervised, and reinforcement learning can all be expressed under the framework of transformations between representations in the compositional problem graph, we hope that our work motivates further research on the compositional generalization problem in many other domains, to accelerate the long-range generalization capabilities that are characteristic of general-purpose learning machines.

Multilingual arithmetic (Sec. 4.1): The dataset contains arithmetic expressions of k terms, where the terms are integers in {0, ..., 9} and the operators are drawn from {+, ·, −}, expressed in five different languages. With 5 choices for the source language and target language, the number of possible problems is (10^k)(3^(k−1))(5^2). In training, each source language is seen with 4 target languages and each target language is seen with 4 source languages: 20 pairs are seen in training and 5 pairs are held out for testing. The learner sees 46,200/(1.68 · 10^8) = 2.76 · 10^−4 of the training distribution. The entire space of possible problems in the extrapolation set is (10^10)(3^9)(5^2) = 4.92 · 10^15, out of which we draw samples from the 5 held-out language pairs, with (10^10)(3^9)(5) = 9.84 · 10^14 possible problems. An input expression is a sequence of one-hot vectors of size 13 × 5 + 1 = 66, where the single additional element is a STOP token (for training the RNN).

Spatially transformed MNIST (Sec. 4.2): The generative process for transforming the standard MNIST dataset into the input the learner observes is described as follows. We first center the 28x28 MNIST image in a 42x42 black background. We have three types of transformations to apply to the image: scale, rotate, and translate. We can scale big or small (by a factor of 0.6 each way). We can rotate left or right (by 45 degrees in each direction). We can translate left, right, up, and down, but the degree to which we translate depends on the size of the object: we translate the digit to the edge of the image, so smaller digits get translated more than large digits. Large digits are translated by 20% of the image width, unscaled digits are translated by 29% of the image width, and small digits are translated by 38% of the image width. In total there are 2 + 2 + 4 × 3 = 16 individual transformation operations used in the generative process. Because some transformation combinations are commutative, we defined an ordering with which we apply the generative transformations: scale, then rotate, then translate. For length-2 compositions of generative transformations, there are scale-small-then-translate (1 × 4), scale-big-then-translate (1 × 4), rotate-then-translate (2 × 4), and scale-then-rotate (2 × 2). We randomly choose 16 of these 20 for training, 2 for validation, and 2 for test, as shown in Figure 4 (center). For length-3 compositions of generative transformations, there are scale-small-then-rotate-then-translate (1×2×4) and scale-big-then-rotate-then-translate (1×2×4); all 16 were held out for evaluation. A runnable sketch of this generative process follows below.
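The following is our reconstruction of the generative process as a sketch, not the released pipeline: the 0.6 scaling factor, the 45-degree rotations, the canonical order, and the size-dependent translation fractions come from the text, while the use of torchvision and the reading of "scale big" as the inverse factor 1/0.6 are assumptions.

    import torchvision.transforms.functional as TF

    def generate(img, scale=None, rotate=None, translate=None):
        # img: 42x42 PIL image with the digit centered on a black background.
        size = "none"
        if scale is not None:                  # "small" or "big"
            factor = 0.6 if scale == "small" else 1 / 0.6
            img = TF.affine(img, angle=0, translate=(0, 0), scale=factor, shear=0)
            size = scale
        if rotate is not None:                 # "left" or "right"
            img = TF.rotate(img, 45 if rotate == "left" else -45)
        if translate is not None:              # "left", "right", "up", "down"
            frac = {"big": 0.20, "none": 0.29, "small": 0.38}[size]
            d = round(42 * frac)
            dx, dy = {"left": (-d, 0), "right": (d, 0),
                      "up": (0, -d), "down": (0, d)}[translate]
            img = TF.affine(img, angle=0, translate=(dx, dy), scale=1.0, shear=0)
        return img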
All learners are implemented in PyTorch BID41 and the code is available at https://github.com/mbchang/crl.

Baseline: The RNN is implemented as a sequence-to-sequence gated recurrent unit (GRU). The controller consists of a policy network and a value function, each implemented as GRUs that read in the input expression. The value function outputs a value estimate for the current expression. For the numerical arithmetic task, the policy network first selects a reducer and then, conditioned on that choice, selects the location in the input expression at which to apply the reducer. For the multilingual arithmetic task, the policy first samples whether to halt, reduce, or translate, and then, conditioned on that choice (if it does not halt), it samples the reducer (along with an index at which to apply it) or the translator. The reducers are initialized as two-layer feedforward networks with ReLU nonlinearities BID37. The translators are linear weight matrices.

Baselines: The CNN is a variant of an all-convolutional network BID63; this was also used as the pre-trained image classifier. The affine-STN predicts all 6 learnable affine parameters as in BID15. The controller consists of a policy network and a value function, each implemented with the same architecture as the CNN baseline. The rotate-STN's localization network is constrained to output the sine and cosine of a rotation angle, the scale-STN's localization network is constrained to output the scaling factor, and the translate-STN's localization network is constrained to output spatial translations.

C EXPERIMENT DETAILS

Training procedure: The training procedure for the controller follows the standard Proximal Policy Optimization training procedure, where the learner samples a set of episodes, pushes them to a replay buffer, and every k episodes updates the controller based on the episodes collected. Independently, every k′ episodes we consolidate those k′ episodes into a batch and use it to train the modules. Via a grid search we found k = 1024 and k′ = 256. Through an informal search whose heuristic was performance on the training set, we settled on updating the curriculum of CRL every 10^5 episodes and updating the curriculum of the RNN every 5 · 10^4 episodes. In the case that HALT is called too early, CRL treats it as a no-op. Similarly, if a reduction operator is called when there is only one token in the expression, the learner also treats it as a no-op. There are other ways around this domain-specific nuance, such as always halting whenever HALT is called but only backpropagating the loss if the expression has been fully reduced (otherwise it would not make sense to compute a loss on an expression that has not been fully reduced). The way we interpret these "invalid actions" is analogous to the standard practice in reinforcement learning of keeping an agent in the same state if it walks into a wall of a maze. The alternating update scheme is sketched below.
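A skeleton of that alternating schedule (our paraphrase; run_episode, ppo_update, and supervised_update are hypothetical placeholders, and PPO internals are omitted):

    K_CONTROLLER, K_MODULES = 1024, 256   # k and k' from the grid search
    controller_buf, module_buf = [], []
    for episode in range(num_episodes):   # num_episodes set by the experiment
        trace = run_episode(env, controller, modules)
        controller_buf.append(trace)
        module_buf.append(trace)
        if len(controller_buf) == K_CONTROLLER:
            ppo_update(controller, controller_buf)    # policy + value update
            controller_buf = []
        if len(module_buf) == K_MODULES:
            supervised_update(modules, module_buf)    # NLL of correct answer
            module_buf = []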
Symmetry breaking: We believe that the random initialization of the modules and the controller breaks the symmetry between the modules. For episodes 0 through k the controller still has the same random initial weights, and for episodes 0 through k′ the modules still have the same random initial weights. Because of this initial randomness, the initial controller will select certain modules more than others for certain inputs; similarly, certain modules will initially perform better than others for certain inputs. Therefore, after k episodes, the controller's parameters will update in a direction that makes choosing the modules that luckily performed better for certain inputs more likely; similarly, after k′ episodes, the modules' parameters will update in a direction that makes them better for the inputs they have been given. So, gradually, modules that initially were slightly better at certain inputs will become more specialized towards those inputs, and they will also get selected more for those inputs.

Training objective: The objective of the composition of modules is to minimize the negative log likelihood of the correct answer to the arithmetic problem. The objective of the controller is to maximize reward. It receives a reward of 1 if the token with maximum log likelihood is that of the correct answer, 0 if not, and −0.01 for every computation step it takes. The step penalty was found by a scale search over {−1, −0.1, −0.01, −0.001}; −0.01 balanced accuracy and computation time to a reasonable degree during training. There is no explicit feedback on what the transformations should be or on how they are composed.

Training procedure: The training procedure is similar to the multilingual arithmetic case. We update the policy every 256 episodes and the modules every 64 episodes. We observed that directly training for large translations was unstable, so to overcome this we used a curriculum. The curriculum began without any translation, then increased the amount of translation by 1% of the image width every 3 · 10^4 episodes until the amount of translation matched 20% of the image width for large digits, 29% of the image width for unscaled digits, and 38% of the image width for small digits. Unlike in the multilingual arithmetic case, during later stages of the curriculum we do not continue training on earlier stages of the curriculum. In the bounded-horizon setup, we manually halt CRL according to the length of the generative transformation combination of the task: if the digit was generated by applying two transformations, then we halt CRL's controller after it selects two modules. Therefore, we did not use a step penalty in this experiment.

Symmetry breaking: The transformation parameters were initialized to output an identity transformation, although the localization networks were randomly initialized across modules, which breaks the symmetry among the modules.

Training objective: The objective is to classify a transformed MNIST digit correctly, based on the negative log likelihood of the correct classification from a pre-trained classifier. The objective of the controller is to maximize reward. It receives a reward of 1 for a correct classification and 0 otherwise. There is no explicit feedback on what the transformations should be or on how they are composed. A sketch of the reward computation for both tasks follows below.
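A minimal sketch of that reward signal (the function and argument names are ours; the values follow the text):

    def reward(final_dist, answer_idx, num_steps, step_penalty=-0.01):
        # final_dist: softmax distribution over the vocabulary (or over digit
        # classes for MNIST) after the last module; answer_idx: correct token.
        correct = int(final_dist.argmax().item() == answer_idx)
        return correct + step_penalty * num_steps

    # For the bounded-horizon MNIST experiment, the horizon is fixed and no
    # step penalty is used, i.e. step_penalty=0.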
The input is a numerical arithmetic expression (e.g. 3 + 4 × 7) and the desired output (e.g. 1) is the evaluation of the expression modulo 10. In our experiments we train on a curriculum from length-2 expressions to length-10 expressions, adding new expressions to an expanding dataset over the course of training. The first challenge is to learn from this limited data (only 6,510 training expressions) to generalize well to unseen length-10 expressions in the test set (≈ 2 · 10^14 possible). The second challenge is to extrapolate from this limited data to length-20 expressions (≈ 10^29 possible). We compare with an RNN architecture directly trained to map input to output. Though the RNN eventually generalizes to different 10-length expressions and extrapolates to 20-length expressions (yellow in Fig. 7) when given 10 times more data than CRL, it completely overfits when given the same amount of data (gray). In contrast, CRL (red) does not overfit, generalizing significantly better to both the 10-length and 20-length test sets. We believe that the modular, disentangled structure of CRL biases it to cleave the problem distribution at its joints, yielding this 10-fold reduction in sample complexity relative to the RNN.

We found that the controller naturally learned to select windows centered around operators (e.g. 2 + 3 rather than ×4−), suggesting that it has discovered the semantic role of these primitive two-term expressions by pattern-matching common structure across arithmetic expressions of different lengths. Note that CRL's extrapolation accuracy here is not perfect compared to prior work; however, CRL achieves such high extrapolation accuracy with only sparse supervision, without the step-by-step supervision on execution traces, the stack-based model of execution, and the hardcoded transformations that such work relies on.

Figure 7: Numerical math task. We compare our learner with the RNN baseline. As a sanity check, we also compare with a version of our learner which has a hardcoded controller (HCC) and a learner which has hardcoded modules (HCF), in which case the controller is restricted to select windows of 3 tokens with an operator in the middle. All models perform well on the training set. Only our method and its HCC and HCF modifications generalize to the testing and extrapolation sets. The RNN requires 10 times more data to generalize to the testing and extrapolation sets. For (b, c) we only show accuracy on the expressions with the maximum length of those added so far to the curriculum. "1e3" and "1e4" correspond to the order of magnitude of the number of samples in the dataset, of which 70% are used for training. The 10th, 50th, and 90th percentiles are shown over 6 runs.

Here we study the effect of varying the number of modules available to our learner. FIG9 (a, b) highlights a particular pathological choice of modules that causes CRL to overfit. If CRL uses four reducers and zero translators (red), it is not surprising that it fails to generalize to the test set: recall that each source language is seen with only four target languages during training, with one held out; each reducer can just learn to reduce to one of the four target languages. What is interesting, though, is that when we add five translators to the four reducers (blue), we see certain runs achieve 100% generalization, even though CRL need not use the translators at all in order to fit the training set. That the blue training curve is slightly faster than the red offers a possible explanation: it may be harder to find a program where each reducer can reduce any source language to its specialized target language, and easier to find programs that involve steps of re-representation (through these translators), where the solution to a new problem is found merely by re-representing that problem as a problem the learner is more familiar with. The four-reducers-five-translators case could have overfitted completely like the four-reducers-zero-translators case, but it consistently does not.
We find that when we vary the number of reducers (1 or 3) and the number of translators (5 or 8) in FIG9, the extrapolation performance is consistent across the choices of different numbers of modules, suggesting that CRL is quite robust to the number of modules in non-pathological cases.

Figure 9 shows the extrapolation accuracy from 6 to 100 terms after training on a curriculum from 2 to 5 terms (46,200 examples) on the multilingual arithmetic task (Sec. 4.1). The number of possible 100-term problems is (10^100)(3^99)(5^2) = 4.29 · 10^148, and CRL achieves about 60% accuracy on these problems; a random guess would achieve 10%. Execution traces are shown for expressions of length 5 and of length 10. We observe that in many cases the controller chooses to take an additional step to translate the fully reduced answer into an answer in the target language, which shows that it composes together, in a novel way, knowledge of how to solve an arithmetic problem with knowledge of how to translate between languages.

Here are two randomly selected execution traces from the numerical arithmetic extrapolation task (train on 10 terms, extrapolate to 20 terms), where CRL's accuracy hovers around 80%. These expressions are derived from the internal representations of CRL, which are softmax distributions over the vocabulary (except for the first expression, which is one-hot because it is the input). The expressions here show the maximum value for each internal representation.

This is a successful execution. The input is 6*1*3-4+6*0*0+1-7-3+3+3*4+1+1+3+3+6+2+7 and the correct answer is 3. Notice that the order in which the controller applies its modules does not strictly follow the order of operations but respects the rules of order of operations: for example, it may decide to perform addition (A) before multiplication (B) if doing so does not affect the final answer.

This is an unsuccessful execution trace. The input is 5+6-4+5*7*3*3*8*0*1-4+6-3*5*3+6-0+0-4-6 and the correct answer is 0. Notice that it tends to follow order of operations by doing multiplication first, although it does make mistakes (D), which in this case was the reason for its incorrect answer. Note that CRL never receives explicit feedback about its mistakes, either on what its modules learn to do or on the order in which it applies them; it only receives a sparse reward signal at the very end. Although (C) was a calculation mistake, it turns out that it does not matter, because the subexpression would be multiplied by 0 anyway.

5+6-4+5*7*3*3*8*0*1-4+6-3*5*3+6-0+0-4-6   # 3*8 = 4
5+6-4+5*7*3*4*0*1-4+6-3*5*3+6-0+0-4-6     # 0-4 = 6
5+6-4+5*7*3*4*0*1-4+6-3*5*3+6-0+6-6       # 5*7 = 5
5+6-4+5*3*4*0*1-4+6-3*5*3+6-0+6-6         # 3*4 = 4 (mistake) (C)
5+6-4+5*4*0*1-4+6-3*5*3+6-0+6-6           # tried to HALT
5+6-4+5*4*0*1-4+6-3*5*3+6-0+6-6           # 5*4 = 0
5+6-4+0*0*1-4+6-3*5*3+6-0+6-6             # 6-6 = 0
5+6-4+0*0*1-4+6-3*5*3+6-0+0
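For reference, a small ground-truth evaluator for this task (our utility, not part of CRL): it evaluates an expression modulo 10 with standard operator precedence, matching the answers quoted above.

    def eval_mod10(expr: str) -> int:
        # Split into signed additive terms, multiply out each term's factors.
        expr = expr.replace(" ", "").replace("-", "+-")
        total = 0
        for term in expr.split("+"):
            if not term:
                continue
            neg = term.startswith("-")
            prod = 1
            for factor in term.lstrip("-").split("*"):
                prod *= int(factor)
            total += -prod if neg else prod
        return total % 10

    assert eval_mod10("3+4*7") == 1
    assert eval_mod10("5+6-4+5*7*3*3*8*0*1-4+6-3*5*3+6-0+0-4-6") == 0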
We explore the problem of compositional generalization and propose a means for endowing neural network architectures with the ability to compose themselves to solve these problems.
The information bottleneck method provides an information-theoretic approach to representation learning, training an encoder to retain all information which is relevant for predicting the label while minimizing the amount of other, superfluous information in the representation. The original formulation, however, requires labeled data in order to identify which information is superfluous. In this work, we extend this ability to the multi-view unsupervised setting, in which two views of the same underlying entity are provided but the label is unknown. This enables us to identify superfluous information as that which is not shared by both views. A theoretical analysis leads to the definition of a new multi-view model that produces state-of-the-art results on the Sketchy dataset and on label-limited versions of the MIR-Flickr dataset. We also extend our theory to the single-view setting by taking advantage of standard data augmentation techniques, empirically showing better generalization capabilities when compared to traditional unsupervised approaches for representation learning.

The goal of deep representation learning is to transform a raw observational input, x, into a representation, z, to extract useful information. Significant progress has been made in deep learning via supervised representation learning, where the labels, y, for the downstream task are known while p(y|x) is learned directly. Due to the cost of acquiring large labeled datasets, a recently renewed focus on unsupervised representation learning seeks to generate representations, z, that allow learning of (a priori unknown) target supervised tasks more efficiently, i.e. with fewer labels. Our work is based on the information bottleneck principle, which states that whenever a data representation discards information from the input which is not useful for a given task, it becomes less affected by nuisances, resulting in increased robustness for downstream tasks. In the supervised setting, one can directly apply the information bottleneck method by minimizing the mutual information between z and x while simultaneously maximizing the mutual information between z and y. In the unsupervised setting, discarding only superfluous information is more challenging, as without labels one cannot directly identify the relevant information. Recent literature (van den) has instead focused on the InfoMax objective, maximizing the mutual information between x and z, I(x, z), instead of minimizing it.

In this paper, we extend the information bottleneck method to the unsupervised multi-view setting. To do this, we rely on a basic assumption of the multi-view literature: that each view provides the same task-relevant information. Hence, one can improve generalization by discarding from the representation all information which is not shared by both views. We do this through an objective which maximizes the mutual information between the representations of the two views (Multi-View InfoMax) while at the same time reducing the mutual information between each view and its corresponding representation (as with the information bottleneck). The resulting representation contains only the information shared by both views, eliminating the effect of independent factors of variation.

Our contributions are three-fold: First, we extend the information bottleneck principle to the unsupervised multi-view setting and provide a rigorous theoretical analysis of its application.
Second, we define a new model that empirically leads to state-of-the-art results in the low-data setting on two standard multi-view datasets, Sketchy and MIR-Flickr. Third, by exploiting standard data augmentation techniques, we empirically show that the representations obtained with our model in single-view settings are more robust than those of other popular unsupervised approaches for representation learning, connecting our theory to the choice of augmentation function class.

The challenge of representation learning can be formulated as finding a distribution p(z|x) that maps data observations x ∈ X into a code space z ∈ Z. Whenever the end goal involves predicting a label y, we consider only the z that are discriminative enough to identify the label. This requirement can be quantified by considering the amount of target information that remains accessible after encoding the data, and is known in the literature as sufficiency of z for y. Definition 1. Sufficiency: A representation z of x is sufficient for y if and only if I(x; y|z) = 0. Any model that has access to a sufficient representation z must be able to predict y at least as accurately as if it had access to the original data x instead. In fact, z is sufficient for y if and only if the amount of information regarding the task is unchanged by the encoding procedure (see Proposition B.1 in the Appendix): I(x; y|z) = 0 ⇐⇒ I(x; y) = I(y; z).

Among sufficient representations, the ones that result in better generalization for unlabeled data instances are particularly appealing. When x has higher information content than y, some of the information in x must be irrelevant for the prediction task. This can be better understood by subdividing I(x; z) into three components using the chain rule of mutual information (see Appendix A):

I(x; z) = I(x; z|y) + I(x; y) − I(x; y|z).    (2)

The conditional mutual information I(x; z|y) represents the information in z that is not predictive of y, i.e. superfluous information; I(x; y) is a constant determined by how much label information is accessible from the raw observations; and the last term, I(x; y|z), represents the amount of information regarding y that is lost by encoding x into z. Note that this last term is zero whenever z is sufficient for y. Since the amount of predictive information I(x; y) is fixed: Proposition 2.1. A sufficient representation z of x for y is minimal whenever I(x; z|y) is minimal. This decomposition can be verified numerically, as in the sketch below.
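As a quick numerical sanity check of the decomposition in Equation 2 (our illustration on an arbitrary discrete joint distribution; not from the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    p = rng.random((3, 4, 5))
    p /= p.sum()                       # joint table p[x, y, z]

    def mi(pab):
        # Mutual information between the two axes of a 2-D joint table.
        pa = pab.sum(1, keepdims=True)
        pb = pab.sum(0, keepdims=True)
        m = pab > 0
        return (pab[m] * np.log(pab[m] / (pa @ pb)[m])).sum()

    def cmi(p3, cond_axis):
        # I(a; b | c), where c is the given axis of the 3-D table.
        p3 = np.moveaxis(p3, cond_axis, 0)
        return sum(mi(pc / pc.sum()) * pc.sum() for pc in p3 if pc.sum() > 0)

    I_xz = mi(p.sum(axis=1))           # I(x; z)
    I_xy = mi(p.sum(axis=2))           # I(x; y)
    I_xz_given_y = cmi(p, 1)           # I(x; z | y), superfluous information
    I_xy_given_z = cmi(p, 2)           # I(x; y | z), lost label information
    assert np.isclose(I_xz, I_xz_given_y + I_xy - I_xy_given_z)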
Minimizing the amount of superfluous information can be done directly only in supervised settings. In fact, reducing I(x; z) without violating the sufficiency constraint necessarily requires making additional assumptions about the predictive task (see Theorem B.1 in the Appendix). In Section 3 we describe a strategy to safely reduce the information content of a representation even when the label y is not observed, by exploiting redundant information in the form of an additional view of the data. Let v1 and v2 be two images of the same object from different viewpoints and let y be its label. Assuming that the object is clearly distinguishable from both v1 and v2, any representation z with all the information that is accessible from both views would also contain the necessary label information. Furthermore, if z captures only the details that are visible from both pictures, it would reduce the total information content, discarding the view-specific details and reducing the sensitivity of the representation to view changes. The theory supporting this intuition is described in the following, where v1 and v2 are jointly observed and referred to as data-views.

In this section we extend our analysis of sufficiency and robustness to the multi-view setting. Intuitively, we can guarantee that z is sufficient for predicting y even without knowing y by ensuring that z maintains all the information which is shared by v1 and v2. This intuition relies on a basic assumption of the multi-view environment: that each view provides the same task-relevant information. To formalize this, we define redundancy. Definition 2. Redundancy: v1 is redundant with respect to v2 for y if and only if I(y; v1|v2) = 0. Intuitively, a view v1 is redundant for a task whenever it is irrelevant for the prediction of y once v2 is already observed. Whenever v1 and v2 are mutually redundant (v1 is redundant with respect to v2 for y, and vice versa), we can show the following: Corollary 1. Let v1 and v2 be two mutually redundant views for a target y and let z1 be a representation of v1. If z1 is sufficient for v2 (I(v1; v2|z1) = 0), then z1 is as predictive for y as the joint observation of the two views (I(v1 v2; y) = I(y; z1)).

In other words, whenever it is possible to assume mutual redundancy, any representation which contains all the information shared by both views (the redundant information) is as useful as their joint observation for predicting the label y. By factorizing the mutual information between v1 and z1 analogously to Equation 2, we can identify three components:

I(v1; z1) = I(v1; z1|v2) + I(v1; v2) − I(v1; v2|z1).

Since I(v1; v2) is a constant that depends only on the two views, and I(v1; v2|z1) must be zero if we want the representation to be sufficient for the label, we conclude that I(v1; z1) can be reduced by minimizing I(v1; z1|v2). This term intuitively represents the information z1 contains which is unique to v1 and not shared by v2. Since we assumed mutual redundancy between the two views, this information must be irrelevant for the predictive task and, therefore, it can be safely discarded. The less the two views have in common, the more I(z1; v1) can be reduced without violating sufficiency for the label, and the more robust the resulting representation. At the extreme, v1 and v2 share only label information, in which case we can show that z1 is minimal for y and our method is identical to the supervised information bottleneck method, without needing to access the labels. Conversely, if v1 and v2 are identical, then our method degenerates to the InfoMax principle, since no information can be safely discarded (see Appendix E).

Given v1 and v2 that satisfy the mutual redundancy condition for a label y, we would like to define an objective function for the representation z1 of v1 that discards as much information as possible without losing any label information. In Section 3.1 we showed that we can maintain sufficiency for y by ensuring that I(v1; v2|z1) = 0, and that decreasing I(z1; v1|v2) increases the robustness of the representation by discarding irrelevant information. Combining these two terms using a relaxed Lagrangian objective, we obtain:

L1(θ; λ1) = Iθ(z1; v1|v2) + λ1 Iθ(v1; v2|z1),    (3)

where θ denotes the dependency on the parameters of the encoder pθ(z1|v1), and λ1 represents the Lagrangian multiplier introduced by the constrained optimization.

Figure 1 (caption): Visualizing our Multi-View Information Bottleneck model for both multi-view and single-view settings.
Whenever p(v1) and p(v2) have the same distribution, the two encoders can share their parameters.

Symmetrically, we define a loss L2 to optimize the parameters ψ of a conditional distribution pψ(z2|v2) that defines a robust sufficient representation z2 of the second view v2:

L2(ψ; λ2) = Iψ(z2; v2|v1) + λ2 Iψ(v1; v2|z2).    (4)

Although L1 and L2 cannot be computed directly, by defining z1 and z2 on the same domain Z and re-parametrizing the Lagrangian multipliers, their sum can be upper bounded (up to constant terms and a rescaling) as follows:

L(θ, ψ) = −Iθψ(z1; z2) + β DSKL(pθ(z1|v1) || pψ(z2|v2)),    (5)

where the first term enforces sufficiency of z1 and z2 for predicting y and the second term enforces robustness. DSKL is the symmetrized KL divergence obtained by averaging DKL(pθ(z1|v1) || pψ(z2|v2)) and DKL(pψ(z2|v2) || pθ(z1|v1)), while the coefficient β defines the trade-off between sufficiency and robustness of the representation, which is a hyper-parameter in this work. The resulting Multi-View Information Bottleneck (MIB) model (Equation 5) is visualized in Figure 1, while the batch-based computation of the loss function is summarized in Algorithm 1. The symmetrized KL divergence DSKL(pθ(z1|v1) || pψ(z2|v2)) can be computed directly whenever pθ(z1|v1) and pψ(z2|v2) have a known density, while the mutual information between the two representations, Iθψ(z1; z2), can be maximized by using any sample-based differentiable mutual information lower bound. Both the Jensen-Shannon IJS and the InfoNCE INCE (van den) estimators used in this work require introducing an auxiliary parametric model Cξ(z1, z2), which is jointly optimized during the training procedure. The full derivation of the MIB loss function can be found in Appendix F; a sketch of the batch-based loss follows below.
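A compact PyTorch sketch of Equation 5 with Gaussian encoders (our illustration, not the released implementation; for brevity we use the InfoNCE bound with a critic that scores all batch pairs, and the encoder and critic shapes are assumptions):

    import torch
    import torch.nn.functional as F
    from torch.distributions import Normal, kl_divergence

    def mib_loss(enc1, enc2, critic, v1, v2, beta):
        # Each encoder outputs mean and log-scale of a factorized Gaussian.
        mu1, log_s1 = enc1(v1).chunk(2, dim=-1)
        mu2, log_s2 = enc2(v2).chunk(2, dim=-1)
        p1, p2 = Normal(mu1, log_s1.exp()), Normal(mu2, log_s2.exp())
        z1, z2 = p1.rsample(), p2.rsample()      # reparametrized samples
        # Symmetrized KL between posteriors (closed form for Gaussians).
        skl = 0.5 * (kl_divergence(p1, p2) + kl_divergence(p2, p1)).sum(-1).mean()
        # InfoNCE lower bound on I(z1; z2): matching pairs lie on the
        # diagonal of the critic's score matrix (constants dropped).
        scores = critic(z1, z2)                  # assumed (B, B) pairwise scores
        labels = torch.arange(scores.size(0), device=scores.device)
        mi_lower_bound = -F.cross_entropy(scores, labels)
        return -mi_lower_bound + beta * skl

The results reported in this paper use the Jensen-Shannon estimator instead; swapping the bound only changes the mi_lower_bound term.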
In this section, we introduce a methodology to build mutually redundant views starting from single observations x with domain X by exploiting known symmetries of the task. By picking a class T of functions t: X → W that do not affect label information, it is possible to artificially build views that satisfy mutual redundancy for y with a procedure similar to data augmentation. Let t1 and t2 be two random variables over T; then v1 := t1(x) and v2 := t2(x) must be mutually redundant for y. Since no function in T affects label information (I(v1; y) = I(v2; y) = I(x; y)), a representation z1 of v1 that is sufficient for v2 must contain the same amount of predictive information as x. Formal proofs can be found in Appendix B.4. Whenever the two transformations of the same observation are independent (I(t1; t2|x) = 0), they introduce uncorrelated variations in the two views. As an example, if T represents a set of small translations, the two views will differ by a small shift. Since this information is not shared, a z1 that contains only the common information between v1 and v2 will discard fine-grained details regarding position. For single-view datasets, we generate the two views v1 and v2 by independently sampling two functions from the same function class T with uniform probability. Since the resulting t1 and t2 have the same distribution, the two generated views will also have the same marginals. For this reason, the two conditional distributions pθ(z1|v1) and pψ(z2|v2) can share their parameters and only one encoder can be used. Full (or partial) parameter sharing can also be applied in the multi-view setting whenever the two views have the same (or similar) marginal distributions.

The space of all possible representations z of x for a predictive task y can be represented as a region in the Information Plane. Each representation is characterised by the amount of information regarding the raw observation, I(x; z), and the corresponding measure of accessible predictive information, I(y; z) (the x and y axes, respectively, in Figure 2). Ideally, a good representation would be maximally informative about the label while retaining a minimal amount of information from the observations (top left corner of the parallelogram). Further details on the Information Plane and the bounds visualized in Figure 2 are described in Appendix C.

Thanks to recent progress in mutual information estimation, the InfoMax principle has gained attention for unsupervised representation learning (van den). Since the InfoMax objective involves maximizing I(x; z), the resulting representation aims to preserve all the information regarding the raw observations (top right corner in Figure 2). Despite their success, it has been shown that the effectiveness of InfoMax models is due to inductive biases introduced by the architectures and estimators rather than the training objective itself, since the InfoMax objective can be trivially maximized by using invertible encoders. On the other hand, Variational Autoencoders (VAEs) define a training objective that balances compression and reconstruction error through a hyper-parameter β. Whenever β is close to 0, the VAE objective aims for a lossless representation, approaching the same region of the Information Plane as the one targeted by InfoMax. When β approaches large values, the representation becomes more compressed, showing increased generalization and disentanglement, and, as β approaches infinity, I(z; x) goes to zero. During this transition from low to high β, however, there are no guarantees that VAEs will retain label information (Theorem B.1 in the Appendix). The path between the two regimes depends on how well the label information aligns with the inductive biases introduced by the encoder, prior, and decoder architectures.

Concurrent work applies the InfoMax principle in multi-view settings (Hénaff et al., 2019), aiming to maximize mutual information between the representation z of a first data-view x and a second one, v2. The target representation for the Multi-View InfoMax (MV-InfoMax) models should contain at least the amount of information in x that is predictive of v2, targeting the region I(z; x) ≥ I(x; v2) on the Information Plane. Whenever x is redundant with respect to v2 for y, the representation must also be sufficient for y (Corollary 1). Since z has no incentive to discard any information regarding x, a representation that is optimal according to the InfoMax principle is also optimal for MV-InfoMax. Our model with β = 0 (Equation 5) belongs to this family of objectives, since the minimality term is discarded.

Table 1: Examples of the two views and class label from the Sketchy dataset (on the left) and comparison between MIB and other popular models in the literature on the sketch-based image retrieval task (on the right). * denotes models that use a 64-bit binary representation. The results for MIB correspond to β = 1.

Method            mAP@all   Prec@200
SaN               0.208     0.292
GN Triplet        0.529     0.716
Siamese CNN       0.481     0.612
Siamese-AlexNet
The direct removal of superfluous information has, so far, been done only in supervised settings. Conversely, β-VAE models remove information indiscriminately, without identifying which part is superfluous, and the InfoMax and Multi-View InfoMax methods do not explicitly try to remove superfluous information at all. In fact, among the representations that are optimal according to Multi-View InfoMax (purple dotted line in Figure 2), the MIB objective results in the representation with the least superfluous information, i.e. the most robust one.

In this section we demonstrate the effectiveness of our model against state-of-the-art baselines in both the multi-view and single-view settings. In the single-view setting, we also estimate the coordinates on the Information Plane for each of the baseline methods as well as our method, to validate the theory in Section 3. The results reported in the following sections are obtained using the Jensen-Shannon IJS estimator, which resulted in better performance for MIB and the other InfoMax-based models (Table 2 in the supplementary material). To facilitate the comparison between the effects of the different loss functions, the same estimator is used across the different models. We compare MIB on the sketch-based image retrieval and Flickr multiclass image classification tasks with domain-specific and prior multi-view learning methods.

Sketchy: The Sketchy dataset consists of 12,500 images and 75,471 hand-drawn sketches of objects from 125 classes. As in prior work, we also include another 60,502 images from ImageNet for the same classes, which results in a total of 73,002 natural object images. As per the experimental protocol of prior work, a total of 6,250 sketches (50 sketches per category) are randomly selected and removed from the training set for testing purposes, which leaves 69,221 sketches for training the model. The sketch-based image retrieval task is a ranking of the 73,002 natural images according to an unseen test (query) sketch. Retrieval is done for our model by generating representations for the query sketch as well as for all natural images, and ranking the images by the distance between their representations and that of the query. The flattened 4096-dimensional feature vectors are fed to our image and sketch encoders to produce a 64-dimensional representation. Both encoders consist of neural networks with hidden layers of 2,048 and 1,024 units respectively. The size of the representation and the regularization strength β are tuned on a validation sub-split. We evaluate MIB on five different train/test splits [1] and report mean and standard deviation in Table 1. Further details on our training procedure and architecture are in Appendix G.

Table 1 shows that our model achieves strong performance on both mean average precision (mAP@all) and precision at 200 (Prec@200), suggesting that the representation is able to capture the common class information between the paired pictures and sketches. The effectiveness of MIB on the retrieval task can be mostly imputed to the regularization introduced by the symmetrized KL divergence between the two encoded views. Besides discarding view-private information, this term actively aligns the representations of v1 and v2, making the MIB model especially suitable for retrieval tasks; a sketch of the ranking-based evaluation follows below.
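A hypothetical sketch of that evaluation step (the helper name is ours): rank the gallery by Euclidean distance in representation space and measure precision among the top 200.

    import torch

    def precision_at_k(query_z, gallery_z, gallery_labels, query_label, k=200):
        # query_z: (D,) representation of the query sketch;
        # gallery_z: (N, D) representations of the natural images.
        dists = torch.cdist(query_z.unsqueeze(0), gallery_z).squeeze(0)  # (N,)
        topk = dists.topk(k, largest=False).indices
        return (gallery_labels[topk] == query_label).float().mean().item()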
MIR-Flickr: The MIR-Flickr dataset consists of 1M images annotated with 800K distinct user tags. Each image is represented by a vector of 3,857 hand-crafted image features (v1), while the 2,000 most frequent tags are used to produce a 2,000-dimensional multi-hot encoding (v2) for each picture. The dataset is divided into labeled and unlabeled sets that contain 25K and 975K images respectively, where the labeled set also provides 38 distinct topic classes together with the user tags. Training images with fewer than two tags are removed, which reduces the total number of training samples to 749,647 pairs. The labeled set contains 5 different splits into train, validation, and test sets of sizes 10K/5K/10K respectively. Following a standard procedure in the literature, we train our model on the unlabeled pairs of images and tags. Then a multi-label logistic classifier is trained from the representations of the 10K labeled train images to the corresponding macro-categories. The quality of the representation is assessed based on the performance of the trained logistic classifier on the labeled test set.

[1] The processed dataset and splits will be publicly released on paper acceptance.
[2] These results are included only for completeness, as the Multi-View InfoMax objective does not produce consistent representations for the two views, so there is no straightforward way to use it for ranking.

Each encoder consists of a multi-layer perceptron of 4 hidden layers with ReLU activations, learning two 1,024-dimensional representations z1 and z2 for images v1 and tags v2 respectively. Examples of the two views, labels, and further details on the training procedure are in Appendix G. Our MIB model is compared with other popular multi-view learning models in Figure 3 for β = 0 (Multi-View InfoMax), β = 1, and β = 10^−3 (best on the validation set). Although the tuned MIB performs similarly to Multi-View InfoMax when a large number of labels is available, it outperforms it when fewer labels are available. Furthermore, by choosing a larger β the accuracy of our model drastically increases in scarce-label regimes, while slightly decreasing when all the labels are observed (see the right side of Figure 3). This effect is likely due to a violation of the mutual redundancy constraint (see Figure 6 in the supplementary material), which can be compensated with smaller values of β for less aggressive compression. A possible reason for the effectiveness of MIB against some of the other baselines may be our ability to use mutual information estimators that do not require reconstruction. Both Multi-View VAE (MVAE) and Deep Variational CCA (VCCA) rely on a reconstruction term to capture cross-modal information, which can introduce bias that decreases performance.

In this section, we compare the performance of different unsupervised learning models by measuring their data efficiency and empirically estimating the coordinates of their representations on the Information Plane. Since accurate estimation of mutual information is extremely expensive, we focus on relatively small experiments that aim to uncover the differences between popular approaches for representation learning. The dataset is generated from MNIST by creating the two views, v1 and v2, via the application of data augmentation consisting of small affine transformations and independent pixel corruption to each image. These are kept small enough to ensure that label information is not affected. Each pair of views is generated from the same underlying image, so no label information is used in this process (details in Appendix G). To evaluate, we train the encoders using the unlabeled multi-view dataset just described, and then fix the representation model. A logistic regression model is then trained using the resulting representations along with a subset of labels for the training set, and we report the accuracy of this model on a disjoint test set, as is standard in the unsupervised representation learning literature (van den); this protocol is sketched below.
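A sketch of this evaluation protocol (our illustration; encode is a hypothetical helper that maps inputs through the frozen encoder):

    from sklearn.linear_model import LogisticRegression

    z_train = encode(frozen_encoder, x_train)   # representations, (N, 64)
    z_test = encode(frozen_encoder, x_test)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(z_train[:num_labels], y_train[:num_labels])  # label subset only
    accuracy = clf.score(z_test, y_test)                 # disjoint test set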
We estimate I(x; z) and I(y; z) using mutual information estimation networks trained from scratch on the final representations, using batches of joint samples (x, y, z) with (x, y) ∼ p(x, y) and z ∼ pθ(z|x). All models are trained using the same encoder architecture, consisting of 2 layers of 1,024 hidden units with ReLU activations, resulting in 64-dimensional representations. The same data augmentation procedure was also applied for the single-view architectures, and models were trained for 1 million iterations with batch size B = 64. Figure 4 summarizes the results. The empirical measurements of mutual information reported on the Information Plane are consistent with the theoretical analysis reported in Section 4: models that retain less information about the data while maintaining the maximal amount of predictive information result in better classification performance in low-label regimes, confirming the hypothesis that discarding irrelevant information yields robustness and more data-efficient representations. Notably, the MIB model with β = 1 retains almost exclusively label information, hardly decreasing classification performance even when only one label is used for each data point.

In this work, we introduce Multi-View Information Bottleneck, a novel method that relies on multiple data-views to produce robust representations for downstream tasks. Most of the multi-view literature operates under the assumption that each view is individually sufficient for determining the label, while our method requires only the weaker mutual redundancy condition outlined in Section 3, enabling it to be applied to any traditional multi-view task. In our experiments, we compared MIB empirically against other approaches in the literature on three such tasks: sketch-based image retrieval, multi-view learning, and unsupervised representation learning. The strong performance obtained in the different areas shows that Multi-View Information Bottleneck can be practically applied to various tasks for which paired observations are either available or artificially produced. Furthermore, the positive results on the MIR-Flickr dataset show that our model can work well in practice even when mutual redundancy holds only approximately. There are multiple extensions that we would like to explore in future work. One interesting direction would be considering more than two views. In Appendix D we discuss why the mutual redundancy condition cannot be trivially extended to more than two views, but we still believe such an extension is possible. Secondly, we believe that exploring the role played by different choices of data augmentation could bridge the gap between the Information Bottleneck principle and the literature on invariant neural networks, which are able to exploit known symmetries and structure of the data to remove superfluous information.

In this section we enumerate some of the properties of mutual information that are used to prove the theorems reported in this work. For any random variables w, x, y and z:

(P1) Positivity: I(x; y) ≥ 0, I(x; y|z) ≥ 0.
(P2) Chain rule: I(xy; z) = I(y; z) + I(x; z|y).
(P3) Chain rule of multivariate mutual information: I(x; y; z) = I(y; z) − I(y; z|x).

Proposition B.1. Whenever z is a representation of x (so that I(y; z|x) = 0), we have I(x; y|z) = I(x; y) − I(y; z). Since both I(x; y) and I(y; z) are non-negative (P1), I(x; y|z) = 0 ⇐⇒ I(y; z) = I(x; y).

Theorem B.1. Let x, y, z and z′ be random variables with joint distribution p(x, y, z, z′), where z and z′ are both representations of x.
If z′ satisfies I(x; z′) < I(x; z), then it is always possible to find a label y for which z′ is not predictive while z is.
Hypothesis: (H1) I(y; z′|x) = 0.
Thesis: (T1) I(x; z′) < I(x; z) =⇒ ∃y. I(y; z) > I(y; z′) = 0.
Proof. By construction.
1. We first factorize x as a function of two independent random variables (Proposition 2.1) by picking y such that x = f(y, z′) for some deterministic function f, with y independent of z′. Note that such a y always exists.
2. Since x is a function of y and z′: (C4) I(x; z|y z′) = 0.
3. Considering I(y; z): I(y; z) = I(y; z|x) + I(x; y; z).
Whenever I(x; z) > I(x; z′), I(y; z) must be strictly positive, while I(y; z′) = 0 by construction. Therefore such a y exists.

Corollary B.1.1. Let z be a representation of x that discards observational information. Then there is always a label y for which z is not predictive, while the original observations are.
Thesis: (T1) ∃y. I(y; x) > I(y; z) = 0.
Proof. By construction, using Theorem B.1 with z′ set to z and z set to x. Since the hypotheses are met, we conclude that there exists a y such that I(y; x) > I(y; z) = 0.

Theorem B.2.
Hypothesis: (H1) I(y; z1|v2 v1) = 0.
Thesis:
Proof. Since z1 is a representation of v1:
Therefore:

Proposition B.3. Let v1 be a redundant view with respect to v2 for y. Any representation z1 of v1 that is sufficient for v2 is also sufficient for y.
Hypothesis: (H1) I(y; z1|v2 v1) = 0; (H2) I(y; v1|v2) = 0.
Thesis:
Proof. Using the result from Theorem B.2:
Hypothesis: (H1) I(y; z1|v1 v2) = 0.
Thesis:
Proof. I(y; z1)

Corollary B.2.1. Let v1 and v2 be mutually redundant views for y. Let z1 be a representation of v1 that is sufficient for v2. Then:
Hypothesis:
Thesis:
Proof. Using Theorem B.2, and since I(y; z1) ≤ I(y; v1 v2) is a consequence of the data processing inequality, we conclude that I(y; z1) = I(y; v1 v2).

Let x and y be random variables with domains X and Y respectively. Let T be a class of functions t: X → W, and let t1 and t2 be random variables over T that depend only on x. For the theorems and corollaries discussed in this section, we use the independence assumptions that can be derived from the graphical model G reported in Figure 5.

Figure 5 (caption): Visualization of the graphical model G that relates the observations x, the label y, the functions used for augmentation t1, t2, and the representation z1.

Proposition B.4. Whenever I(t1(x); y) = I(t2(x); y) = I(x; y), the two views t1(x) and t2(x) must be mutually redundant for y.
Hypothesis: (H1) Independence relations determined by G.
Thesis: (T1) I(t1(x); y) = I(t2(x); y) = I(x; y) =⇒ I(t1(x); y|t2(x)) + I(t2(x); y|t1(x)) = 0.
Proof.
1. (C1) I(t1(x); y|x t2(x)) = 0; (C2) I(y; t2(x)|x) = 0.
2. Since t2(x) is uniquely determined by x and t2: (C3) I(t2(x); y|x t2) = 0.
3.
Consider I(t1(x); y|t2(x)):

I(t1(x); y|t2(x))
(P3) = I(t1(x); y|x t2(x)) + I(t1(x); y; x|t2(x))
(C1) = I(t1(x); y; x|t2(x))
(P3) = I(y; x|t2(x)) − I(y; x|t1(x) t2(x))
(P1) ≤ I(y; x|t2(x))
(P3) = I(y; x) − I(y; x; t2(x))
(P3) = I(y; x) − I(y; t2(x)) + I(y; t2(x)|x)
(P3) = I(y; x) − I(y; t2(x)) + I(y; t2(x)|t2 x) + I(y; t2(x); t2|x)
(C3) = I(y; x) − I(y; t2(x)) + I(y; t2(x); t2|x)
(P3) = I(y; x) − I(y; t2(x)) + I(y; t2(x)|x) − I(y; t2(x)|t2 x)
(P1) ≥ I(y; x) − I(y; t2(x)) + I(y; t2(x)|x)
(C2) ≥ I(y; x) − I(y; t2(x))

Therefore I(y; x) = I(y; t2(x)) =⇒ I(t1(x); y|t2(x)) = 0. The proof for I(y; x) = I(y; t1(x)) =⇒ I(t2(x); y|t1(x)) = 0 is symmetric; therefore we conclude that I(t1(x); y) = I(t2(x); y) = I(x; y) =⇒ I(t1(x); y|t2(x)) + I(t2(x); y|t1(x)) = 0.

Theorem B.3. Let I(t1(x); y) = I(t2(x); y) = I(x; y). Let z1 be a representation of t1(x). If z1 is sufficient for t2(x), then I(x; y) = I(y; z1).
Hypothesis: (H1) Independence relations determined by G; (H2) I(t1(x); y) = I(t2(x); y) = I(x; y).
Thesis: (T1) I(t1(x); t2(x)|z1) = 0 =⇒ I(x; y) = I(y; z1).
Proof. Since t1(x) is redundant with respect to t2(x) (Proposition B.4), any representation z1 of t1(x) that is sufficient for t2(x) must also be sufficient for y (Theorem B.2). Using Proposition B.1 we have I(y; z1) = I(y; t1(x)). Since I(y; t1(x)) = I(y; x) by hypothesis, we conclude I(x; y) = I(y; z1).

Every representation z of x must satisfy the following constraints:
• 0 ≤ I(y; z) ≤ I(x; y): the amount of label information ranges from 0 to the total predictive information accessible from the raw observations, I(x; y).
• I(y; z) ≤ I(x; z) ≤ I(y; z) + H(x|y): the representation must contain more information about the observations than about the label. When x is discrete, the amount of discarded label information I(x; y) − I(y; z) must be smaller than the amount of discarded observational information H(x) − I(x; z), which implies I(x; z) ≤ I(y; z) + H(x|y).
Proof. Since z is a representation of x:
Considering the four bounds separately:
= H(x|y) + I(y; z)
Note that (H2) is needed only to prove bound 4. For continuous x, bounds 1, 2 and 3 still hold.

The mutual redundancy condition between two views v1 and v2 for a label y cannot be trivially extended to an arbitrary number of views, as the relation is not transitive because of higher-order interactions between the different views and the label. This can be shown with a simple example. Given three views v1, v2 and v3 and a task y such that:
• v1 and v2 are mutually redundant for y,
• v2 and v3 are mutually redundant for y,
then v1 is not necessarily mutually redundant with respect to v3 for y. Let v1, v2 and v3 be fair and independent binary random variables. Defining y as the exclusive or of v1 and v3 (y := v1 XOR v3), we have that I(v1; y) = I(v3; y) = 0. In this setting, v1 and v2 are mutually redundant for y. Analogously, v2 and v3 are also mutually redundant for y, as the three random variables are not predictive of each other. Nevertheless, v1 and v3 are not mutually redundant for y: I(y; v1|v3) = H(v1|v3) − H(v1|v3 y) = H(v1) = 1 > 0, where H(v1|v3 y) = H(v3|v1 y) = 0 follows from v1 = v3 XOR y and v3 = v1 XOR y, while H(v1) = H(v3) = 1 holds by construction. This counter-intuitive higher-order interaction between multiple views makes our theory non-trivial to generalize to more than two views, requiring an extension of our theory to ensure sufficiency for the label.
Different objectives in the literature can be seen as special cases of the Multi-View Information Bottleneck principle. In this section we show that the supervised version of the Information Bottleneck is equivalent to the corresponding multi-view version whenever the two redundant views have only label information in common. A second subsection shows the equivalence between InfoMax and Multi-View Information Bottleneck whenever the two views are identical.

Whenever the two mutually redundant views v1 and v2 have only label information in common (or when one of the two views is the label itself), the Multi-View Information Bottleneck objective is equivalent to the respective supervised version. This can be shown by proving that I(v1; z1|v2) = I(v1; z1|y), i.e. a representation z1 of v1 that is sufficient and minimal for v2 is also sufficient and minimal for y.

Proposition E.1. Let v1 and v2 be mutually redundant views for a label y that share only label information. Then a sufficient representation z1 of v1 for v2 that is minimal for v2 is also a minimal representation for y.
Hypothesis:
Thesis:
Proof.
1. Consider I(v1; z1) = I(v1; z1|v2) + I(v1; y).
2. Using Corollary 1, from (H2) and (H3) it follows that I(v1; y|z1) = 0.
3. I(v1; z1) can alternatively be expressed as I(v1; z1) = I(v1; z1|y) + I(v1; y).
Equating 1 and 3, we conclude I(v1; z1|v2) = I(v1; z1|y).

Whenever v1 = v2, a representation z1 of v1 that is sufficient for v1 must contain all the original information. Furthermore, since I(v1; z1|v1) = 0 for every representation, no superfluous information can be identified and removed. As a consequence, a minimal sufficient representation z1 of v1 for v1 is any representation for which the mutual information is maximal, hence InfoMax.

Starting from Equation 3, we consider the sum of the losses L1(θ; λ1) and L2(ψ; λ2) that aim to create the minimal sufficient representations z1 and z2 respectively. Considering z1 and z2 on the same domain Z, I_θ(v1; z1|v2) can be upper-bounded as
I_θ(v1; z1|v2) ≤ E[ D_KL( p_θ(z1|v1) || p_ψ(z2|v2) ) ].
Note that the bound is tight whenever p_ψ(z2|v2) coincides with p_θ(z1|v2); this happens whenever z1 and z2 produce a consistent encoding. Analogously, I_ψ(v2; z2|v1) is upper-bounded by D_KL( p_ψ(z2|v2) || p_θ(z1|v1) ). I_θ(v1; v2|z1) can be rephrased and bounded as
I_θ(v1; v2|z1) ≤ I(v1; v2) − I_θψ(z1; z2),
where the starred step follows from z2 being a representation of v2. The bound reported in this equation is tight whenever z2 is sufficient for z1, which happens whenever z2 contains all the information regarding z1 (and therefore v1). Once again, the same bound can symmetrically be used to define I_θ(v1; v2|z2) ≤ I(v1; v2) − I_θψ(z1; z2). Since I(v1; v2) is constant in θ and ψ, the loss function in Equation 6 can be upper-bounded by a weighted combination of −I_θψ(z1; z2) and the symmetrized KL divergence between the two posteriors. Lastly, multiplying both terms by β := 2/(λ1 + λ2) and re-parametrizing the objective, we obtain:
L(θ, ψ) = −I_θψ(z1; z2) + β D_SKL( p_θ(z1|v1) || p_ψ(z2|v2) ).

G EXPERIMENTAL PROCEDURE AND DETAILS

The two stochastic encoders p_θ(z1|v1) and p_ψ(z2|v2) are modeled by Normal distributions parametrized with neural networks (µ_θ, σ²_θ) and (µ_ψ, σ²_ψ) respectively. Since the density of the two encoders can be evaluated, the symmetrized KL divergence in Equation 4 can be computed directly. On the other hand, I_θψ(z1; z2) requires the use of a mutual information estimator. To facilitate the optimization, the hyper-parameter β is slowly increased during training, starting from a small value (≈ 10⁻⁴) up to its final value with an exponential schedule.
This is because the mutual information estimator is trained together with the other architectures and, since it starts from a random initialization, it requires an initial warm-up. Starting with a bigger β results in the encoder collapsing onto a fixed representation. The update policy for the hyper-parameter during training has not shown a strong influence on the representation, as long as the mutual information estimator network has reached full capacity. All the experiments have been performed using the Adam optimizer with a learning rate of 10.
• Input: The two views for the sketch-based classification task consist of 4096-dimensional sketch and image features extracted from two distinct VGG-16 network models which were pre-trained on images and sketches from the TU-Berlin dataset for end-to-end classification. The feature extractors are frozen during the training procedure for the two representations. Each training iteration used batches of size B = 128.
• Encoder and Critic architectures: Both sketch and image encoders consist of multi-layer perceptrons with 2 hidden ReLU layers of size 2,048 and 1,024 respectively, with an output of size 2×64 that parametrizes mean and variance of the two Gaussian posteriors. The critic architecture also consists of a multi-layer perceptron with 2 hidden ReLU layers of size 512.
• β update policy: The initial value of β is set to 10⁻⁴. Starting from the 10,000th training iteration, the value of β is exponentially increased up to 1.0 during the following 250,000 training iterations. The value of β is then kept fixed at one until the end of the training procedure (500,000 iterations).
• Evaluation: All natural images are used as both training sets and retrieval galleries. The 64-dimensional real-valued outputs of the sketch and image representations are compared using the Euclidean distance. For a fair comparison with other methods that rely on binary hashing, we used the Hamming distance on a binarized representation (obtained by applying iterative quantization to our real-valued representation). We report the mean average precision (mAP@all) and precision at top rank 200 (Prec@200) on both the real and binary representations to evaluate our method and compare it with prior works.
• Input: Whitening is applied to the handcrafted image features. Batches of size B = 128 are used for each update step.
• Encoders and Critic architectures: The two encoders consist of multi-layer perceptrons with 4 hidden ReLU layers of size 1,024, which exactly resemble the architecture used in prior work. [Figure 6: Examples of pictures v1, tags v2 and category labels y for the MIR-Flickr dataset. As visualized in the second row, the tags are not always predictive of the label; for this reason, the mutual redundancy assumption holds only approximately.] Both representations z1 and z2 have a size of 1,024; the two architectures therefore output a total of 2×1,024 parameters that define the mean and variance of the respective factorized Gaussian posteriors. Similarly to the Sketchy experiments, the critic consists of a multi-layer perceptron with 2 hidden ReLU layers of size 512.
• β update policy: The initial value of β is set to 10⁻⁸. Starting from the 150,000th iteration, β is exponentially increased up to 1.0 (and 10⁻³) during the following 150,000 iterations.
• Evaluation: Once the models are trained on the unlabeled set, the representations of the 25,000 labeled images are computed. The resulting vectors are used for training and evaluating a multi-label logistic regression classifier on the respective splits. The optimal parameters (such as β) for our model are chosen based on the performance on the validation set. In Table 3, we report the aggregated mean over the 5 test splits as the final mean average precision value.
• Encoders, Decoders and Critic architectures: All the encoders used for the MNIST experiments consist of neural networks with two hidden layers of 1,024 units and ReLU activations, producing a 2×64-dimensional parameter vector that is used to parametrize the mean and variance of the Gaussian posteriors. The decoders used for the VAE experiments also consist of networks of the same size. Similarly, the critic architecture used for mutual information estimation consists of two hidden layers of 1,024 units each with ReLU activations.
• β update policy: The initial value of β is set to 10⁻³, and it is increased with an exponential schedule starting from the 50,000th until the 150,000th iteration. The value of β is then kept constant until the 1,000,000th iteration. The same annealing policy is used to train the different β-VAEs reported in this work.
• Evaluation: The trained representations are evaluated following the well-known protocol described in Tschannen et al.. The Jensen-Shannon mutual information lower bound is maximized during training, while the numerical estimates are computed using an energy-based bound. The final values for I(x; z) and I(y; z) are computed by averaging the mutual information estimates over the whole dataset. In order to reduce the variance of the estimator, the lowest and highest 5% are removed before averaging. This practical detail makes the estimation more consistent and less susceptible to numerical instabilities.

In this section we include additional quantitative results and visualizations which refer to the single-view MNIST experiments reported in section 5.2. [Table 2: Comparison of the amount of input information I(x; z), label information I(z; y), and accuracy of a linear classifier trained with different amounts of labeled examples (Ex) for the models reported in Figure 4. The results obtained using both the Jensen-Shannon I_JSD and the InfoNCE I_NCE (van den Oord et al.) estimators are reported.]

Figure 7 reports the linear projection of the embedding obtained using the MIB model. The latent space appears to roughly consist of ten clusters which correspond to the different digits. This observation is consistent with the empirical measurement of input and label information I(x; z) ≈ I(z; y) ≈ log 10, and with the performance of the linear classifier in scarce-label regimes. As the clusters are distinct and concentrated around their respective centroids, 10 labeled examples are sufficient to align the centroid coordinates with the digit labels.

H ABLATION STUDIES

H.1 DIFFERENT RANGES OF DATA AUGMENTATION

Figure 8 visualizes the effect of different ranges of corruption probability used as the data augmentation strategy to produce the two views v1 and v2. The MV-InfoMax model does not seem to get any advantage from an increasing amount of corruption, and its representation remains approximately in the same region of the information plane.
On the other hand, the models trained with the MIB objective are able to take advantage of the augmentation to remove irrelevant data information, and the representation transitions from the top-right corner of the information plane (no augmentation) to the top-left. When the amount of corruption approaches 100%, the mutual redundancy assumption is clearly violated, and the performance of MIB deteriorates. In the initial part of the transition between the two regimes (which corresponds to extremely low probabilities of corruption) the MIB models drop some label information, which is quickly regained when pixel corruption becomes more frequent. We hypothesize that this behavior is due to a problem with the optimization procedure: since the corruptions are extremely unlikely, the Monte-Carlo estimation of the symmetrized Kullback-Leibler divergence is more biased. Using more examples of views produced from the same data point within the same batch could mitigate this issue.

The hyper-parameter β (Equation 5) determines the trade-off between sufficiency and minimality of the representation for the second data view. When β is zero, the training objective of MIB is equivalent to the Multi-View InfoMax target, since the representation has no incentive to discard any information. When 0 < β ≤ 1 the sufficiency constraint is enforced, while the superfluous information is gradually removed from the representation. Values of β > 1 can result in representations that violate the sufficiency constraint, since the minimization of I(x; z|v2) is prioritized. The trade-off resulting from the choice of different β is visualized in Figure 9 and compared against β-VAE. Note that in each point of the Pareto front the MIB model results in a better trade-off between I(x; z) and I(y; z) when compared to β-VAE. The effectiveness of the Multi-View Information Bottleneck model is also justified by the corresponding values of predictive accuracy.
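As a concrete summary of the objective discussed above, the following sketch computes one step of the Multi-View Information Bottleneck loss −I(z1; z2) + β·D_SKL for two factorized Gaussian encoders. It is a minimal illustration, not the authors' released code: the bilinear critic, the InfoNCE-style estimator, the layer sizes and the schedule constants are our own illustrative choices.

```python
import math
import torch
import torch.nn as nn

def gaussian_params(net, v):
    """Split a network output into mean and log-variance of a factorized Gaussian."""
    mu, logvar = net(v).chunk(2, dim=-1)
    return mu, logvar

def symmetrized_kl(mu1, logvar1, mu2, logvar2):
    """Closed-form D_KL(N1||N2) + D_KL(N2||N1) for factorized Gaussians, averaged over the batch."""
    var1, var2 = logvar1.exp(), logvar2.exp()
    kl_12 = 0.5 * (logvar2 - logvar1 + (var1 + (mu1 - mu2) ** 2) / var2 - 1).sum(-1)
    kl_21 = 0.5 * (logvar1 - logvar2 + (var2 + (mu1 - mu2) ** 2) / var1 - 1).sum(-1)
    return (kl_12 + kl_21).mean()

def infonce_lower_bound(z1, z2, critic):
    """InfoNCE estimate of I(z1; z2) using in-batch negatives."""
    scores = critic(z1, z2)                           # (B, B) critic score matrix
    labels = torch.arange(z1.size(0))                 # positives on the diagonal
    return -nn.functional.cross_entropy(scores, labels) + math.log(z1.size(0))

class BilinearCritic(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Parameter(torch.randn(dim, dim) / dim ** 0.5)
    def forward(self, z1, z2):
        return z1 @ self.W @ z2.t()

# Toy setup: sizes are illustrative, not the ones used in the experiments above.
dim_v, dim_z, batch = 32, 8, 64
enc1 = nn.Sequential(nn.Linear(dim_v, 128), nn.ReLU(), nn.Linear(128, 2 * dim_z))
enc2 = nn.Sequential(nn.Linear(dim_v, 128), nn.ReLU(), nn.Linear(128, 2 * dim_z))
critic = BilinearCritic(dim_z)

v1, v2 = torch.randn(batch, dim_v), torch.randn(batch, dim_v)
mu1, lv1 = gaussian_params(enc1, v1)
mu2, lv2 = gaussian_params(enc2, v2)
z1 = mu1 + (0.5 * lv1).exp() * torch.randn_like(mu1)  # reparametrized samples
z2 = mu2 + (0.5 * lv2).exp() * torch.randn_like(mu2)

# Exponential beta annealing after a warm-up phase, mirroring the schedules above.
step, warmup, anneal = 20000, 10000, 250000
beta = 1e-4 if step < warmup else min(1.0, 1e-4 * (1.0 / 1e-4) ** ((step - warmup) / anneal))
loss = -infonce_lower_bound(z1, z2, critic) + beta * symmetrized_kl(mu1, lv1, mu2, lv2)
loss.backward()
```

The warm-up guard matches the rationale above: the critic is trained jointly with the encoders, so the KL term is only given weight once the mutual information estimator has had time to become informative.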
We extend the information bottleneck method to the unsupervised multiview setting and show state of the art results on standard datasets
924
scitldr
The biological plausibility of the backpropagation algorithm has long been doubted by neuroscientists. Two major reasons are that neurons would need to send two different types of signal in the forward and backward phases, and that pairs of neurons would need to communicate through symmetric bidirectional connections. We present a simple two-phase learning procedure for fixed point recurrent networks that addresses both these issues. In our model, neurons perform leaky integration and synaptic weights are updated through a local mechanism. Our learning method extends the framework of Equilibrium Propagation to general dynamics, relaxing the requirement of an energy function. As a consequence of this generalization, the algorithm does not compute the true gradient of the objective function, but rather approximates it at a precision which is proven to be directly related to the degree of symmetry of the feedforward and feedback weights. We show experimentally that the intrinsic properties of the system lead to alignment of the feedforward and feedback weights, and that our algorithm optimizes the objective function. Deep learning BID18 is the de-facto standard in areas such as computer vision BID17, speech recognition and machine translation BID3. These applications deal with different types of data and share little in common at first glance. Remarkably, all these models typically rely on the same basic principle: optimization of objective functions using the backpropagation algorithm. Hence the question: does the cortex in the brain implement a mechanism similar to backpropagation, which optimizes objective functions?

The backpropagation algorithm used to train neural networks requires a side network for the propagation of error derivatives, which is widely seen as biologically implausible BID7. One hypothesis, formulated in early work, is that error signals in biological networks could be encoded in the temporal derivatives of the neural activity and propagated through the network via the neuronal dynamics itself, without the need for a side network. Neural computation would correspond to both inference and error back-propagation. The present work explores this idea as well. The framework of Equilibrium Propagation BID29 requires the network dynamics to be derived from an energy function, enabling computation of an exact gradient of an objective function. However, in terms of biological realism, the requirement of symmetric weights between neurons arising from the energy function is not desirable. The work presented here extends this framework to general dynamics, without the need for energy functions, gradient dynamics, or symmetric connections. Our approach is the following. We start from classical models in neuroscience for the dynamics of the neuron's membrane voltage and for synaptic plasticity (section 3). Unlike in the Hopfield model BID16, we do not assume pairs of neurons to have symmetric connections. We then describe an algorithm for supervised learning based on these models (section 4) with minimal extra assumptions. Our model is based on two phases: at prediction time, no synaptic changes occur, whereas a local update rule becomes effective when the targets are observed. The proposed update mechanism is compatible with spike-timing-dependent plasticity, which supposedly governs synaptic changes in biological neural systems. Finally, we show that the proposed algorithm has the desirable machine learning property of optimizing an objective function (section 5).
We show this experimentally (Figure 3) and we provide the beginnings of a theoretical explanation. Historically, models based on energy functions and/or gradient dynamics have represented a key subject of neural network research. Their mathematical properties often allow for a simplified analysis, in the sense that there often exists an elegant formula or algorithm for computing the gradient of the objective function BID0 BID24 BID29. However, we argue in this section that 1. due to the energy function, such models are very restrictive in terms of the dynamics they can model (for instance, the Hopfield model requires symmetric weights), and 2. machine learning algorithms do not require computation of the gradient of the objective function, as shown in this work and the work of BID19. In this work, we propose a simple learning algorithm based on few assumptions. To this end, we relax the requirement of an energy function and, at the same time, we give up on computing the gradient of the objective function. We believe that, in order to make progress in biologically plausible machine learning, dynamics more general than gradient dynamics should be studied. As discussed in section 6, another motivation for studying more general dynamics is the possible implementation of machine learning algorithms, such as our model, on analog hardware: analog circuits implement differential equations, which do not generally correspond to gradient dynamics.

Most dynamical systems observed in nature cannot be described by gradient dynamics. A gradient field is a very special kind of vector field, precisely because it derives from a primitive scalar function. The existence of a primitive function considerably limits the "number of degrees of freedom" of the vector field and implies important restrictions on the dynamics. In general, a vector field does not derive from a primitive function. In particular, the dynamics of the leaky integrator neuron model studied in this work (Eq. 1) is not a gradient dynamics, unless extra (biologically implausible) assumptions are made, such as exact symmetry of the synaptic weights (W_ij = W_ji) in the case of the Hopfield model.

Machine learning relies on the basic principle of optimizing objective functions. Most of the work done in deep learning has focused on optimizing objective functions by gradient descent in the weight space (thanks to backpropagation). Although it is very well known that following the gradient is not necessarily the best option (many optimization methods based on adaptive learning rates for individual parameters have been proposed, such as Adagrad BID9), almost all proposed optimization methods rely on computing the gradient, even if they do not follow the gradient. In the field of deep learning, "computing the gradient" has almost become synonymous with "optimizing". In fact, in order to optimize a given objective function, not only is following the gradient unnecessary, but one does not even need to compute the gradient of that objective function. A weaker sufficient condition is to compute a direction in the parameter space whose scalar product with the gradient is negative, without computing the gradient itself. A major step forward was achieved by BID19. One of the contributions of their work was to dispel the long-held assumption that a learning algorithm should compute the gradient of an objective function in order to be sound.
Their algorithm computes a direction in the parameter space that has at first sight little to do with the gradient of the objective function. Yet, their algorithm "learns" in the sense that it optimizes the objective function. By giving up on the idea of computing the gradient of the objective function, a key aspect rendering backpropagation biologically implausible could be fixed, namely the weight transport problem. The work presented here is along the same lines. We give up on the idea of computing the gradient of the objective function, and by doing so, we get rid of the biologically implausible symmetric connections required in the Hopfield model. In this sense, the "weight transport" problem in the backpropagation algorithm appears to be similar, at a high level, to the requirement of symmetric connections in the Hopfield model. We suggest that in order to make progress in biologically plausible machine learning, it might be necessary to move away from computing the true gradients in the weight space. An important theoretical effort to be made is to understand and characterize the dynamics in the weight space that optimize objective functions. The set of such dynamics is of course much larger than the tiny subset of gradient dynamics.

We denote by s_i the averaged membrane voltage of neuron i across time, which is continuous-valued and plays the role of a state variable for neuron i. We also denote by ρ(s_i) the firing rate of neuron i. We suppose that ρ is a deterministic function (nonlinear activation) that maps the averaged voltage s_i to the firing rate ρ(s_i). The synaptic strength from neuron j to neuron i is denoted by W_ij. In biological neurons, a classical model for the time evolution of the membrane voltage s_i is the rate-based leaky integrator neuron model, in which neurons are seen as performing leaky temporal integration of their past inputs BID8:

ds_i/dt = Σ_{j≠i} W_ij ρ(s_j) − s_i.    (1)

Unlike energy-based models such as the Hopfield model BID16 that assume symmetric connections between neurons, in the model studied here the connections between neurons are not tied. Thus, our model is described by a directed graph, whereas the Hopfield model is best regarded as an undirected graph. [Figure 1: (a) The network model studied here is best represented by a directed graph. (b) The Hopfield model is best represented by an undirected graph. From the point of view of biological plausibility, the symmetry of connections in the Hopfield model is a major drawback (1b). The model that we study here is, like a biological neural network, a directed graph (1a).]

3.2 SPIKE-TIMING DEPENDENT PLASTICITY

Spike-Timing Dependent Plasticity (STDP) is considered a key mechanism of synaptic change in biological neurons BID21 BID11 BID22. STDP is often conceived of as a spike-based process which relates the change in the synaptic weight W_ij to the timing difference between postsynaptic spikes (in neuron i) and presynaptic spikes (in neuron j) BID5. In fact, both experimental and computational work suggest that postsynaptic voltage, not postsynaptic spiking, is more important for driving LTP (Long Term Potentiation) and LTD (Long Term Depression) BID6 BID20. Similarly, it has been shown in simulations that a simplified Hebbian update rule based on pre- and post-synaptic activity can functionally reproduce STDP:

dW_ij ∝ ρ(s_j) ds_i.    (2)

Throughout this paper we will refer to this update rule (Eq. 2) as the "STDP-compatible weight change" and propose a machine learning justification for such an update rule. Let s = (s_1, s_2, . . .)
be the global state variable and the parameter W the matrix of connection weights W_ij. We write µ(W, s) for the vector whose components are defined as

µ_i(W, s) = Σ_{j≠i} W_ij ρ(s_j) − s_i,    (3)

defining a vector field over the neurons' state space, indicating in which direction each neuron's activity changes:

ds/dt = µ(W, s).    (4)

Since ρ(s_j) = ∂µ_i/∂W_ij (W, s), the weight change of Eq. 2 can also be expressed in terms of µ in the form dW_ij ∝ ∂µ_i/∂W_ij (W, s) ds_i. Note that for all i' ≠ i we have ∂µ_{i'}/∂W_ij = 0, since to each synapse W_ij corresponds a unique post-synaptic neuron s_i. Hence dW_ij ∝ ∂µ/∂W_ij (W, s) · ds. We rewrite the STDP-compatible weight change in the more concise form

dW_ij ∝ ∂µ/∂W_ij (W, s) · ds.    (5)

The framework and the algorithm in their general forms are described in Appendix A. To illustrate our algorithm, we consider here the supervised setting in which we want to predict an output y given an input x. We describe a simple two-phase learning procedure based on the dynamics of Eq. 4 and Eq. 5 for the state and the parameter variables. This algorithm is similar to the one proposed by BID29, but here we do not assume symmetric weights between neurons. Note that similar algorithms have also been proposed by BID13 and, more recently, by BID23. Our contributions in this work are theoretical insights into why the proposed algorithm works.

In the supervised setting studied here, the units of the network are split into two sets: the inputs x, whose values are always clamped, and the dynamically evolving units h (the neurons' activity, indicating the state of the network), which themselves include the hidden layers (h_1 and h_2 here) and an output layer (h_0 here), as in Figure 2. In this context the vector field µ is defined by its components µ_0, µ_1 and µ_2 on h_0, h_1 and h_2 respectively, as follows:

µ_0(W, h) = W_{01} ρ(h_1) − h_0,    (6)
µ_1(W, h) = W_{10} ρ(h_0) + W_{12} ρ(h_2) − h_1,    (7)
µ_2(W, x, h) = W_{21} ρ(h_1) + W_{23} ρ(x) − h_2.    (8)

Here the scalar function ρ is applied elementwise to the components of the vectors. The neurons h follow the dynamics

dh/dt = µ(W, x, h).    (9)

In this section and the next we use the notation h rather than s for the state variable. The layer h_0 plays the role of the output layer where the prediction is read. The target outputs, denoted by y, have the same dimension as the output layer h_0. The discrepancy between the output units h_0 and the targets y is measured by the quadratic cost function

C(y, h_0) = ½ ‖y − h_0‖².    (10)

Unlike in the continuous Hopfield model, here the feed-forward and feedback weights are not tied, and in general the state dynamics of Eq. 9 is not guaranteed to converge to a fixed point. However, we observe experimentally that the dynamics almost always converges. We will see in section 5 that, for a whole set of values of the weight matrix W, the dynamics of the neurons h converges. Assuming this condition holds, the dynamics of the neurons converges to a fixed point which we denote by h^0 (beware not to confuse this with the notation h_0 for the output units). The prediction h_0^0 is then read out on the output layer and compared to the actual target y. The objective function (for a single training case (x, y)) that we aim to minimize is the cost at the fixed point h^0, which we write

J(W, x, y) := C(y, h_0^0).    (11)

Note that this objective function is the same as the one proposed by BID1 BID28. Their method to optimize J is to compute the gradient of J thanks to an algorithm which they call "Recurrent Backpropagation". Other methods related to Recurrent Backpropagation exist to compute the gradient of J, in particular the "adjoint method", "implicit differentiation" and "Backprop Through Time".
These methods are biologically implausible, as argued in Appendix B. Here, our approach to optimize J is to give up on computing the true gradient of J and, instead, we propose a simple algorithm based only on the leaky integrator dynamics (Eq. 4) and the STDP-compatible weight change (Eq. 5). We will show in section 5 that our algorithm computes a proxy for the gradient of J. Also, note that in its general formulation, our algorithm applies to any vector field µ and cost function C (Appendix A).

The idea of Equilibrium Propagation BID29 is to see the cost function C (Eq. 10) as an external potential energy for the output units h_0, which can drive them towards their target y. Following the same idea, we define the "extended vector field" µ^β as

µ^β(W, x, y, h) := µ(W, x, h) − β ∂C/∂h (y, h),    (12)

and we redefine the dynamics of the state variable h as

dh/dt = µ^β(W, x, y, h).    (13)

The real-valued scalar β ≥ 0 controls whether the output h_0 is pushed towards the target y or not, and by how much. We call β the "influence parameter" or "clamping factor". The differential equation of motion (Eq. 13) can be seen as a sum of two "forces" that act on the temporal derivative of the state variable h. Apart from the vector field µ that models the interactions between neurons within the network, an "external force" −β ∂C/∂h induced by the external potential βC acts on the output neurons:

dh_0/dt = µ_0(W, h) + β (y − h_0),    (14)
dh_i/dt = µ_i(W, x, h) for the hidden layers i ≥ 1.    (15)

The form of Eq. 14 suggests that when β = 0, the output units h_0 are not sensitive to the targets y from the outside world. In this case we say that the network is in the free phase (or first phase). When β > 0, the "external force" drives the output units h_0 towards the target y. When β ≳ 0 (a small positive value), we say that the network is in the weakly clamped phase (or second phase). Also, note that the case β → ∞, not studied here, would correspond to fully clamped outputs.

We propose a simple two-phase learning procedure, similar to the one proposed by BID29. In the first phase of training, the inputs are set (clamped) to the input values. The state variable (all the other neurons) follows the dynamics of Eq. 9 (or equivalently Eq. 13 with β = 0) and the output units are free. We call this phase the free phase, as the system relaxes freely towards the free fixed point h^0 without any external constraint on its output neurons. During this phase, the synaptic weights are unchanged. [Figure 2: Input x is clamped. Neurons h include "hidden layers" h_2 and h_1, and the "output layer" h_0 that corresponds to the layer where the prediction is read. Target y has the same dimension as h_0. The clamping factor β scales the "external force" −β ∂C/∂h that attracts the output h_0 towards the target y.]

In the second phase, the influence parameter β takes on a small positive value β ≳ 0. The state variable follows the dynamics of Eq. 13 for that new value of β, and the synaptic weights follow the STDP-compatible weight change of Eq. 5. This phase is referred to as the weakly clamped phase. The novel "external force" −β ∂C/∂h in the dynamics of Eq. 13 acts on the output units and drives them towards their targets (Eq. 14). This force models the observation of y: it nudges the output units h_0 from their free fixed point value in the direction of their targets. Since this force only acts on the output layer h_0, the other hidden layers (h_i with i > 0) are initially at equilibrium at the beginning of the weakly clamped phase.
The perturbation caused at the output layer will then propagate backwards along the layers of the network, giving rise to "back-propagating" error signals. The network eventually settles to a new nearby fixed point, corresponding to the new value β ≳ 0, termed the weakly clamped fixed point and denoted h^β. Our model assumes that the STDP-compatible weight change (Eq. 5) occurs during the second phase of training (the weakly clamped phase), when the network's state moves from the free fixed point h^0 to the weakly clamped fixed point h^β. Normalizing by a factor β and letting β → 0, we get the update rule ∆W ∝ ν(W) for the weights, where ν(W) is the vector defined as

ν(W) := lim_{β→0} (1/β) (∂µ/∂W)^T (W, x, h^0) (h^β − h^0).    (16)

The vector ν(W) has the same dimension as W. Formally, ν is a vector field in the weight space. It is shown in section 5 that ν(W) is a proxy for the gradient ∂J/∂W. The effectiveness of the proposed method is demonstrated through experimental studies (Figure 3).

In this section, we attempt to understand why the proposed algorithm is experimentally found to optimize the objective function J (Figure 3). We say that W is a "good parameter" if:
1. for any initial state of the neurons, the state dynamics dh/dt = µ(W, x, h) converges to a fixed point (a condition required for the algorithm to be correctly defined),
2. the scalar product ∂J/∂W · ν(W) at the point W is negative (a desirable condition for the algorithm to optimize the objective function J).
Experiments show that the dynamics of h (almost) always converges to a fixed point and that J consistently decreases (Figure 3). This means that, during training, as the parameter W follows the update rule ∆W ∝ ν(W), all values of W that the network takes are "good parameters". In this section we attempt to explain why.

THE RELATION BETWEEN ∂J/∂W AND ν

Theorem 1. The gradient of J can be expressed in terms of µ and C as

∂J/∂W = −∂C/∂h (∂µ/∂h)^{-1} ∂µ/∂W,

and, similarly, the vector field ν (Eq. 16) is equal to

ν(W) = (∂µ/∂W)^T (∂µ/∂h)^{-1} (∂C/∂h)^T.

In these expressions, all terms are evaluated at the fixed point h^0. These formulas show that ν coincides with −(∂J/∂W)^T whenever the Jacobian ∂µ/∂h is symmetric, and that the angle between these two vectors is directly linked to the "degree of symmetry" of the Jacobian of µ.

An important particular case is the setting of Equilibrium Propagation BID29, in which the vector field µ is a gradient field µ = −∂E/∂h, meaning that it derives from an energy function E. In this case the Jacobian of µ is symmetric since it is the (negative) Hessian of E; indeed, ∂µ/∂h = −∂²E/∂h². Therefore, Theorem 1 shows that ν is also a gradient field, namely the gradient of the objective function J, that is, ν = −∂J/∂W. Note that in this setting the set of "good parameters" is the entire weight space: for all W, the dynamics dh/dt = −∂E/∂h (W, h) converges to an energy minimum, and W converges to a minimum of J since ∆W ∝ −∂J/∂W. We argue that the set of "good parameters" covers a large proportion of the weight space and that it contains the matrices W that present a form of symmetry or "alignment". In the next subsection, we discuss how this form of symmetry may arise from the learning procedure itself. [Figure 3: Example system trained on the MNIST dataset, as described in Appendix C. Left: The objective function is optimized: the training error decreases to 0.00% in around 70 epochs. The generalization error is about 2%. Right: A form of symmetry or alignment arises between the feedforward and feedback weights W_{k,k+1} and W_{k+1,k}, in the sense that tr(W_{k,k+1} · W_{k+1,k}) > 0. This architecture uses 3 hidden layers, each of dimension 512.]
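The full two-phase procedure can be condensed into a short simulation. The sketch below is our own minimal illustration, not the authors' released code: the layered vector field follows Eqs. 6-9, the hard-sigmoid firing rate, the layer sizes, the step size eps and the relaxation length are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
rho = lambda s: np.clip(s, 0.0, 1.0)        # hard-sigmoid firing rate (illustrative choice)

sizes = [5, 20, 30, 40]                     # [h0 (output), h1, h2, x (input)] dimensions
# Untied feedforward/feedback weights: W[(i, j)] maps rho of layer j into layer i.
W = {(k, k + 1): 0.1 * rng.standard_normal((sizes[k], sizes[k + 1])) for k in range(3)}
W.update({(k + 1, k): 0.1 * rng.standard_normal((sizes[k + 1], sizes[k])) for k in range(2)})

def mu(h, x):
    """Layered vector field (Eqs. 6-8): neighbor drive minus a leak term."""
    return [W[(0, 1)] @ rho(h[1]) - h[0],
            W[(1, 0)] @ rho(h[0]) + W[(1, 2)] @ rho(h[2]) - h[1],
            W[(2, 1)] @ rho(h[1]) + W[(2, 3)] @ rho(x) - h[2]]

def relax(h, x, y=None, beta=0.0, steps=200, eps=0.05):
    """Iterate dh/dt = mu + beta * (y - h0) to an approximate fixed point (Eq. 13)."""
    for _ in range(steps):
        m = mu(h, x)
        if beta > 0.0:
            m[0] = m[0] + beta * (y - h[0])           # external force on the output layer
        h = [np.clip(hk + eps * mk, 0.0, 1.0) for hk, mk in zip(h, m)]
    return h

def train_step(x, y, beta=0.5, lr=0.01):
    h = [np.zeros(sizes[k]) for k in range(3)]
    h_free = relax(h, x)                              # first phase: free relaxation
    h_clamp = relax(h_free, x, y, beta=beta)          # second phase: weakly clamped
    full = h_free + [x]                               # index 3 refers to the clamped input
    # STDP-compatible update (Eq. 16): dW_ij ∝ rho(s_j) * (h_i^beta - h_i^0) / beta.
    for (i, j), Wij in W.items():
        post = (h_clamp[i] - h_free[i]) / beta
        W[(i, j)] = Wij + lr * np.outer(post, rho(full[j]))
    return float(np.mean((y - h_free[0]) ** 2))

x = rng.random(sizes[3])
y = np.zeros(sizes[0]); y[2] = 1.0
for _ in range(100):
    cost = train_step(x, y)
print("squared error of the free-phase prediction:", cost)
```

Only locally available quantities appear in the weight update (the presynaptic rate and the temporal change of the postsynaptic state), which is the point of the construction: no separate error-propagation network is required.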
Experiments show that a form of symmetry between the feedforward and feedback weights arises from the learning procedure itself (Figure 3). Although the causes of this phenomenon aren't understood very well yet, it is worth pointing out that similar observations have been made in previous work and in different settings. A striking example is the following one. A major argument against the plausibility of backpropagation in feedforward nets is the weight transport problem: the signals sent forward in the network and those sent backward use the same connections. BID19 have observed that, in the backward pass, (back)propagating the error signals through fixed random feedback weights (rather than the transpose of the feedforward weights) does not harm learning. Moreover, the learned feedforward weights W_{k,k+1} tend to 'align' with the fixed random feedback weights W_{k+1,k}, in the sense that the trace of W_{k,k+1} · W_{k+1,k} is positive. Denoising autoencoders without tied weights constitute another example of learning algorithms where a form of symmetry in the weights has been observed as learning goes on BID31. The theoretical results from BID2 also show that, in a deep generative model, the transpose of the generative weights performs approximate inference. They show that the symmetric solution minimizes the autoencoder reconstruction error between two successive layers of rectifying linear units.

Our approach provides a basis for implementing machine learning models in continuous-time systems, while the requirements regarding the actual dynamics are reduced to a minimum. This means that the model applies to a large class of physical realizations of vector-field dynamics, including analog electronic circuits. Implementations of recurrent networks based on analog electronics have been proposed in the past, e.g. BID13; however, these models typically required circuits and associated dynamics to adhere to an exact theoretical model. With our framework, we provide a way of implementing a learning system on a physical substrate without even knowing the exact dynamics or microscopic mechanisms that give rise to it. Thus, this approach can be used to train an analog electronic system end-to-end, without having to worry about exact device parameters and inaccuracies, which inevitably exist in any physical system. Instead of approximately implementing idealized computations, the actual analog circuit, with all its individual device variations, is trained to perform the task of interest. Thereby, the more direct implementation of the dynamics might result in advantages in terms of speed, power, and scalability, as compared to digital approaches.

Our model demonstrates that biologically plausible learning in neural networks can be achieved with relatively few assumptions. As a key contribution, in contrast to energy-based approaches such as the Hopfield model, we do not impose any symmetry constraints on the neural connections. Our algorithm assumes two phases, the difference between them being whether synaptic changes occur or not. Although this assumption begs for an explanation, neurophysiological findings suggest that phase-dependent mechanisms are involved in learning and memory consolidation in biological systems. Theta waves, for instance, generate neural oscillatory patterns that can modulate the learning rule or the computation carried out by the network BID26.
Furthermore, synaptic plasticity, and neural dynamics in general, are known to be modulated by inhibitory neurons and dopamine release, depending on the presence or absence of a target signal BID10 BID27.

In its general formulation (Appendix A), the work presented in this paper is an extension of the framework of BID29 to general dynamics. This is achieved by relaxing the requirement of an energy function. This generalization comes at the cost of not being able to compute the (true) gradient of the objective function, but rather a direction in the weight space which is related to it. Thereby, the precision of the approximation of the gradient is directly related to the "alignment" between the feedforward and feedback weights. Even though the exact underlying mechanism is not fully understood yet, we observe experimentally that during training the weights symmetrize to some extent, as has been observed previously in a variety of other settings BID19 BID31 BID2. Our work shows that optimization of an objective function can be achieved without ever computing the (true) gradient. More thorough theoretical analysis needs to be carried out to understand and characterize the dynamics in the weight space that optimize objective functions. Naturally, the set of all such dynamics is much larger than the tiny subset of gradient-based dynamics. Our framework provides a means of implementing learning in a variety of physical substrates whose precise dynamics might not even be known exactly, but which simply have to be in the set of supported dynamics. In particular, this applies to analog electronic circuits, potentially leading to faster, more efficient, and more compact implementations.

In this Appendix, we present the framework and the algorithm in their general formulations and we prove the theoretical results. We consider a physical system specified by a state variable s and a parameter variable θ. The system is also influenced by an external input v; e.g. in the supervised setting v = (x, y), where y is the target that the system wants to predict given x. Let s → µ(θ, v, s) be a vector field in the state space and C(θ, v, s) a cost function. We assume that the state dynamics induced by µ converges to a stable fixed point s^0_{θ,v}, corresponding to the "prediction" of the model and characterized by

µ(θ, v, s^0_{θ,v}) = 0.

The objective function that we want to optimize is the cost at the fixed point,

J(θ, v) := C(θ, v, s^0_{θ,v}).

Note the distinction between J and C: the cost function is defined for any state s, whereas the objective function is the cost at the fixed point. The training objective (for a single data sample v) is to

find arg min_θ J(θ, v).

Equivalently, the training objective can be reformulated as a constrained optimization problem:

min_{θ,s} C(θ, v, s)  subject to  µ(θ, v, s) = 0,

where the constraint µ(θ, v, s) = 0 is the fixed point condition. All traditional methods to compute the gradient of J (the adjoint method, implicit differentiation, Recurrent Backpropagation and Backpropagation Through Time, or BPTT) are thought to be biologically implausible. Our approach is to give up on computing the gradient of J and to let the parameter variable θ follow a vector field ν in the parameter space which is "close" to the gradient of J. Before defining ν, we first introduce the "extended vector field"

µ^β(θ, v, s) := µ(θ, v, s) − β ∂C/∂s (θ, v, s),

where β is a real-valued scalar called the "influence parameter". Then we extend the notion of fixed point to any value of β.
For any β we define the β-fixed point s^β_{θ,v} such that

µ^β(θ, v, s^β_{θ,v}) = 0.    (Eq. 27)

Under mild regularity conditions on µ and C, the implicit function theorem ensures that, for a fixed data sample v, the function (θ, β) → s^β_{θ,v} is differentiable. Now for every θ and v we define the vector ν(θ, v) in the parameter space as

ν(θ, v) := ∂C/∂θ (θ, v, s^0_{θ,v}) + lim_{β→0} (1/β) (∂µ/∂θ)^T (θ, v, s^0_{θ,v}) (s^β_{θ,v} − s^0_{θ,v}).    (Eq. 26)

As shown in section 4, the second term on the right-hand side can be estimated in a biologically realistic way thanks to a two-phase training procedure.

Lemma 2. The derivatives of the fixed point with respect to θ and β are

∂s^0_{θ,v}/∂θ = −(∂µ/∂s)^{-1} ∂µ/∂θ    (Eq. 28)

and

∂s^β_{θ,v}/∂β |_{β=0} = (∂µ/∂s)^{-1} (∂C/∂s)^T,    (Eq. 29)

where all factors are evaluated at the fixed point.

Proof of Lemma 2. First we differentiate the fixed point equation Eq. 27 with respect to θ: ∂µ/∂s · ∂s^0/∂θ + ∂µ/∂θ = 0. Rearranging the terms we get Eq. 28. Similarly, we differentiate the fixed point equation Eq. 27 with respect to β, at β = 0: ∂µ/∂s · ∂s^β/∂β − (∂C/∂s)^T = 0. Rearranging the terms we get Eq. 29.

Theorem 3. The gradient of the objective function is equal to

∂J/∂θ = ∂C/∂θ − ∂C/∂s (∂µ/∂s)^{-1} ∂µ/∂θ,    (Eq. 32)

and the vector field ν is equal to

ν(θ, v) = (∂C/∂θ)^T + (∂µ/∂θ)^T (∂µ/∂s)^{-1} (∂C/∂s)^T.    (Eq. 33)

All the factors on the right-hand sides of Eq. 32 and Eq. 33 are evaluated at the fixed point s^0_θ.

Proof of Theorem 3. Let us compute the gradient of the objective function with respect to θ. Using the chain rule of differentiation we get ∂J/∂θ = ∂C/∂θ + ∂C/∂s · ∂s^0/∂θ. Hence Eq. 32 follows from Eq. 28 evaluated at β = 0. Similarly, the expression for the vector field ν (Eq. 33) follows from its definition (Eq. 26), the identity Eq. 29 evaluated at β = 0, and the fact that lim_{β→0} (s^β − s^0)/β = ∂s^β/∂β |_{β=0}.

We finish by stating and proving a last result. Consider the setting introduced in section 4 with the quadratic cost function C = ½ ‖y − h_0‖². In the weakly clamped phase, the "external influence" −β (y − h_0) added to the vector field µ (with β ≳ 0) slightly attracts the output state h_0 to the target y. It is intuitively clear that the weakly clamped fixed point is better than the free fixed point in terms of prediction error. Proposition 5 below generalizes this property to any vector field µ and any cost function C.

Proposition 4. Let s^0 be a stable fixed point of the vector field s → µ(s), in the sense that (s − s^0) · µ(s) < 0 for s in the neighborhood of s^0 (i.e. the vector field at s points towards s^0). Then the Jacobian of µ at the fixed point, ∂µ/∂s (s^0), is negative, in the sense that v^T (∂µ/∂s (s^0)) v ≤ 0 for every vector v.

Proof. Let v be a vector in the state space, α > 0 a positive scalar, and let s := s^0 + αv. For α small enough, the vector s is in the region of stability of s^0. Using a first-order Taylor expansion and the fixed point condition µ(s^0) = 0 we get 0 > (s − s^0) · µ(s) = α² (v^T ∂µ/∂s (s^0) v + o(1)), hence the result as α → 0.

The following proposition shows that, unless the free fixed point s^0_{θ,v} is already optimal in terms of cost value, for β > 0 small enough, the nudged fixed point s^β_{θ,v} achieves a lower cost value than the free fixed point. Thus, a small perturbation due to a small increment of β nudges the network towards a configuration that reduces the cost value. The inequality holds because, by Proposition 4, the Jacobian ∂µ/∂s at the stable fixed point is negative: evaluating d/dβ C(θ, v, s^β_{θ,v}) at β = 0 with Eq. 29 gives ∂C/∂s (∂µ/∂s)^{-1} (∂C/∂s)^T ≤ 0, with strict inequality unless ∂C/∂s = 0 at the free fixed point.

B ADJOINT METHOD AND RELATED ALGORITHMS

Earlier works have proposed various methods to compute the gradient of the objective function J (Eq. 20). One common method is the "adjoint method". In the context of the fixed point recurrent neural networks studied here, the adjoint method leads to Backpropagation Through Time (BPTT) and "Recurrent Backpropagation" BID1 BID28. BPTT is the method of choice today for deep learning, but its biological implausibility is obvious: it requires the network to store all its past states for the propagation of error derivatives in the second phase. Recurrent Backpropagation corresponds to a special case of BPTT where the network is initialized at the fixed point.
This algorithm does not need to store the past states of the network (the state at the fixed point suffices), but it still requires neurons to send a different kind of signal through a different computational path in the second phase, which seems therefore less biologically plausible than our algorithm. Our approach is to give up on the idea of computing the gradient of the objective function. Instead, we show that the STDP-compatible weight change computes a proxy to the gradient in a more biologically plausible way. For completeness, we state and prove a continuous-time version of Backpropagation Through Time and of Recurrent Backpropagation. The formulas for the propagation of error derivatives (Theorem 6 and Corollary 7) will make it obvious that our algorithm is more biologically plausible than both of these algorithms. To keep notations simple, we omit writing the dependence on the data sample v. We consider the dynamics ds/dt = µ(θ, s) and denote by s_t the state of the system at time t ≥ 0 when it starts from an initial state s_0 at time t = 0. Note that s_t converges to the fixed point s^0_θ as t → ∞. We then define a family of objective functions L(θ, s_0, T) := C(θ, s_T) for every couple (s_0, T) of initial state s_0 and duration T ≥ 0. This is the cost of the state at time t = T when the network starts from the state s_0 at time t = 0. Note that L(θ, s_0, T) tends to J(θ) as T → ∞ (Eq. 20). We want to compute the gradient ∂L/∂θ (θ, s_0, T) as T → ∞. For that purpose, we fix T to a large value and we consider the quantity

∂L/∂s_{T−t} := ∂L/∂s (θ, s_{T−t}, t),

which represents the "partial derivative of the cost with respect to the state at time T − t". In other words, this is the "partial derivative of the cost with respect to the (T − t)-th hidden layer" if one regards the network as unfolded in time (though time is continuous here). The formulas in Theorem 6 below correspond to a continuous-time version of BPTT for the propagation of the partial derivatives ∂L/∂s_{T−t}.

h_i cannot reach values above 1. As a consequence, h_i must remain in the domain 0 ≤ h_i ≤ 1. Therefore, rather than the standard gradient descent (Eq. 58), we use a slightly different update rule for the state variable h, in which the updated state is clipped to the domain [0, 1] (Eq. 59). This little implementation detail turns out to be very important: if the i-th hidden unit were in some state h_i < 0, then Eq. 58 would give the update rule h_i ← (1 − ε)h_i, which would imply again h_i < 0 at the next time step (assuming ε < 1). As a consequence, h_i would remain in the negative range forever. We use different learning rates for the different layers in our experiments. We do not have a clear explanation for why this improves performance, but we believe that this is due to the finite precision with which we approach the fixed points. The hyperparameters chosen for each model are shown in Table 1 and the results are shown in Figure 3. We initialize the weights according to the Glorot-Bengio initialization BID12. [Table 1: Hyperparameters for both the 2- and 3-layered MNIST models. The learning rate ε is used for iterative inference (Eq. 59); β is the value of the clamping factor in the second phase; α_k is the learning rate for updating the parameters in layer k.] [Figure: Example system trained on the MNIST dataset, as described in Appendix C. The objective function is optimized: the training error decreases to 0.00%. The generalization error lies between 2% and 3% depending on the architecture.] We were also able to train on MNIST using a Convolutional Neural Network (CNN).
We got around 2% generalization error. The hyperparameters chosen to train this Convolutional Neural Network are shown in Table 2.
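The clipping detail of Eq. 59 can be made concrete with a toy comparison. This is our own illustration (the step size, the initial negative state, and the stand-in drive value 0.8 are arbitrary assumptions), reproducing the behavior described in Appendix C:

```python
import numpy as np

eps = 0.5          # step size (illustrative)
h58 = -0.2         # a hidden unit starting in the negative range
h59 = -0.2

for _ in range(10):
    # Eq. 58 behaviour as described in the text: for h < 0 the update reduces to
    # h <- (1 - eps) * h, so the unit decays toward 0 but stays negative forever.
    h58 = (1 - eps) * h58
    # Eq. 59 behaviour: apply the update, then clip the state back into [0, 1];
    # 0.8 is an arbitrary stand-in for the recurrent drive toward a positive state.
    h59 = float(np.clip(h59 + eps * (0.8 - h59), 0.0, 1.0))

print(h58)   # still negative: the unit is stuck in the dead zone of rho
print(h59)   # inside [0, 1]: the clipped unit escaped and can participate in learning
```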
We describe a biologically plausible learning algorithm for fixed point recurrent networks without tied weights
925
scitldr
In this paper we present, to the best of our knowledge, the first method to learn a generative model of 3D shapes from natural images in a fully unsupervised way. For example, we do not use any ground truth 3D or 2D annotations, stereo video, or ego-motion during the training. Our approach follows the general strategy of Generative Adversarial Networks, where an image generator network learns to create image samples that are realistic enough to fool a discriminator network into believing that they are natural images. In contrast, in our approach the image generation is split into 2 stages. In the first stage a generator network outputs 3D objects. In the second, a differentiable renderer produces an image of the 3D object from a random viewpoint. The key observation is that a realistic 3D object should yield a realistic rendering from any plausible viewpoint. Thus, by randomizing the choice of the viewpoint, our proposed training forces the generator network to learn an interpretable 3D representation disentangled from the viewpoint. In this work, a 3D representation consists of a triangle mesh and a texture map that is used to color the triangle surface by using the UV-mapping technique. We provide an analysis of our learning approach, expose its ambiguities and show how to overcome them. Experimentally, we demonstrate that our method can learn realistic 3D shapes of faces by using only the natural images of the FFHQ dataset.

Generative Adversarial Nets (GANs) have become the gold standard generative model over the years. Their capability is demonstrated on many datasets of natural images. These generative models can create sample images that are nearly indistinguishable from real ones. GANs do not need to make assumptions on the data formation other than applying a neural network to a latent noise input. They can also operate in a fully unsupervised way. GANs have had strong theoretical foundations since the beginning: it has been demonstrated that, under suitable conditions, GANs learn to generate samples with the same probability distribution as the samples in the training dataset. However, there is one notable drawback of GANs compared to classical generative models like Gaussian Mixture Models or Naive Bayes classifiers. Classical models are interpretable, but GANs are not. Interpretability is achieved in classical models by making strong assumptions on the data formation process and by using them on interpretable engineered features. Our work combines the best of both worlds for 3D shape learning. We keep the advantages of GANs: unsupervised training, applicability on real datasets without feature engineering, theoretical guarantees and simplicity. We also make our representation interpretable, as our generator network provides a 3D mesh as an output. To make the learning possible, we make the assumption that natural images are formed by a differentiable renderer. This renderer produces fake images by taking the 3D mesh, its texture, a background image and a viewpoint as its input. During training a discriminator network is trained against the generator in an adversarial fashion. It tries to classify whether its input is fake or comes from the training dataset. The key observation is that a valid 3D object should look realistic from multiple viewpoints. This image realism is thus enforced by the GAN training. This idea was first applied to generating 3D shapes in pioneering prior work, which, however, had several limitations.
It was only applicable to synthetic images, where the mask is available, and it produced only black and white images. Our method works on natural images and we do not need silhouettes as a supervision signal. From a theoretical point of view, image realism means that the images lie inside the support of the probability distribution of natural images (image realism has previously been used in this way for super-resolution). AmbientGAN proves that one can recover an underlying latent data distribution when a known image formation function is applied to the data, thus achieving data realism by enforcing image realism. Our work and the pioneering work above are special cases of AmbientGAN for 3D shape learning. However, this task in general does not satisfy all the assumptions of AmbientGAN, which results in ambiguities in the training. We resolve these issues in our paper by using suitable priors. We summarize our contributions below:
• For the first time, to the best of our knowledge, we provide a procedure to build a generative model that learns explicit 3D representations in an unsupervised way from natural images. We achieve that using a generator network and a renderer trained against a discriminator in a GAN setting. Samples from our model are shown in Figure 1.
• We introduce a novel differentiable renderer, which is a fundamental component for obtaining a high-quality generative model. Notably, it is differentiable with respect to the 3D vertex coordinates. The gradients are not approximated; they can be computed exactly, even at the object boundaries and in the presence of self-occlusions.
• We analyze our learning setup in terms of the ambiguities in the learning task. These ambiguities might derail the training to undesirable results. We show that these problems can only be solved when one uses labels or prior knowledge on the data distribution. Finally, we provide practical solutions to overcome the problems that originate from the ambiguities.
In Table 1, we show the most relevant prior work with a detailed list of used supervision signals. There, we consider the full training scenario from beginning to end. For example if a method uses a pre-trained network from another previous work in its setup, we consider it supervised if the pre-trained network used additional annotation during its training. A very successful 3D generative model is the Basel face model introduced by , which models the 3D shape of faces as a linear combination of base shapes. To create it, classical 3D reconstruction techniques (see) and laser scans were used. This model is used in several methods (e.g., and autoencoders (e.g.) that learn 3D representations by directly using 3D as the supervision signal. The most relevant papers similar to our work are the ones that use differentiable rendering and using randomly sampled viewpoints to enforce image realism. There are GAN based methods by;; and Variational autoencoder based methods by;;. However, Figure 2: Illustration of the training setup. G and D are the generator and discriminator neural networks. R is the differentiable renderer and it has no trainable parameters. The random variables z, m and v are the latent vector, 3D object and the viewpoint parameters. The fake images are x f and the real images are x r. these methods are only applicable on synthetic data or use weak supervision for training as shown in Table 1. Our method can also be interpreted as a way to disentangle the 3D and the viewpoint factors from images in an unsupervised manner. used image triplets for the task. They utilized an autoencoder to reconstruct an image from the mixed latent encodings of other two images. and only use image pairs that share the viewpoint attribute, thus reducing part of the supervision in the GAN training. and use mixing latent variables for unsupervised disentangling. HoloGAN is a method for disentangling objects and viewpoints using latent shape representations. In contrast we learn an explicite 3D mesh representation, which can be used in traditional rendering pipelines. By using this rendering in the training, we demonstrate the disentangling of the 3D shape from the viewpoint without any labels and also guarantee interpretability and consistency across viewpoints. An important component of our model is the renderer. Differentiable renderers like Neural mesh renderer Kato et al. or OpenDR Loper & Black (2014 . have been used along with neural networks for shape learning tasks. Differentiability is essential to make use of the gradient descent algorithms commonly employed in the training of neural networks. We introduce a novel renderer, where the gradients are exactly computed and not approximated at the object boundaries. We are interested in building a mapping from a random vector to a 3D object (texture and vertex coordinates), and a image. We call these three components the scene representation. To generate a view of this scene we also need to specify a viewpoint, which we call the camera representation. The combination of the scene and camera representations is also referred to as simply the representation, and it is used by a differential renderer R to construct an image. We train a generator G in an adversarial fashion against a discriminator D by feeding zero-mean Gaussian samples z as input (see Fig. 2). The objective of the generator is to map Gaussian samples to scene representations m that in realistic renderings x f for the viewpoint v used during training. 
The discriminator then receives the fake x_f and real x_r images as inputs. The GAN training solves the optimization problem min_G max_D E_{x_r}[log D(x_r)] + E_{z,v}[log(1 − D(x_f))], where x_f = R(G(z), v) are the generated fake images, m = G(z) are the 3D shape representations and x_r are the real data samples. The renderer R is a fixed function, i.e., without trainable parameters, but differentiable. The viewpoints v are randomly sampled from a known viewpoint distribution. In practice, G and D are neural networks and the optimization is done using a variant of stochastic gradient descent (SGD).
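To make the training setup concrete, here is a minimal, hedged PyTorch-style sketch of one optimization step with the renderer in the loop. The names G, D, R and viewpoint_sampler are placeholders for the modules described above, D is assumed to output probabilities, and the non-saturating generator loss is used as a common practical substitute for the minimax objective.

```python
import torch

def gan_step(G, D, R, real_images, opt_g, opt_d, viewpoint_sampler, z_dim=512):
    """One adversarial step: G maps z to a scene representation m, the fixed
    differentiable renderer R turns (m, v) into a fake image, and D is
    trained to separate fake from real. All module names are placeholders."""
    n = real_images.size(0)
    z = torch.randn(n, z_dim)
    v = viewpoint_sampler(n)                      # v ~ known viewpoint prior p_v
    x_fake = R(G(z), v)                           # x_f = R(G(z), v)

    # Discriminator update: maximize log D(x_r) + log(1 - D(x_f))
    opt_d.zero_grad()
    loss_d = -(torch.log(D(real_images) + 1e-8).mean()
               + torch.log(1 - D(x_fake.detach()) + 1e-8).mean())
    loss_d.backward()
    opt_d.step()

    # Generator update: gradients flow through the fixed renderer R into G
    opt_g.zero_grad()
    loss_g = -torch.log(D(x_fake) + 1e-8).mean()  # non-saturating variant
    loss_g.backward()
    opt_g.step()
```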
In this section we give a theoretical analysis of our method and describe assumptions that are needed to make the training of the generator succeed. We build on the theory of AmbientGAN and examine its assumptions in the 3D shape learning task.
Assumption 1 The images in the dataset x_r = R(m_r, v_r) are formed by the differentiable rendering function R given the 3D representation m_r ∼ p_m and viewpoint v_r ∼ p_v. Here p_m and p_v are the "true" probability density functions of 3D scenes and viewpoints.
This assumption is needed to make sure that an optimal generator exists. If some real images cannot be generated by the renderer, then the generator can never learn the corresponding model. We can safely assume we have a powerful enough renderer for the task. Note that this does not mean the real data has to be synthetically rendered with the specific renderer R.
Assumption 2 The ground-truth viewpoint v_r ∼ p_v and the 3D scenes m_r ∼ p_m are independent random variables. The distribution of the 3D representations p_m is not known (it will be learned). The viewpoint distribution p_v is known, and we can sample from it, but the viewpoint v_r is not known for any specific data sample x_r.
This assumption is satisfied unless images in the dataset are subject to some capture bias. For example, this would be the case for celebrities that have a "preferred" side of their face when posing for pictures. More technically, this assumption allows us to randomly sample viewpoints v independently from the generated models m.
Assumption 3 Given the image formation model x = R(m, v), the image distribution p_x uniquely determines the scene distribution p_m.
This assumption is necessary for learning the ground truth p_m. In the unsupervised learning task we can only measure the success of the learning algorithm on the output images during the training. If multiple distributions can induce the same data x ∼ p_x, there is no way for any learning algorithm to choose between them, unless prior knowledge on p_m (in practice an added constraint on m in the optimization) is available. The 3D shape learning task in general is ambiguous. For example, in the case of the hollow-mask illusion, people are fooled by an inverted depth image. Many of these ambiguities depend on the parametrization of the 3D representation. For example, different meshes can reproduce the same depth image with different triangle configurations; thus a triangle mesh is more ambiguous than a depth map. However, if the aim is to reconstruct the depth of an object, the ambiguities arising from the mesh representation do not cause an ambiguity in the depth. We call these acceptable ambiguities. For natural images one has to model the whole 3D scene, otherwise there is a trivial ambiguity when a static background is used: the generator can simply move the object out of the camera field of view and generate the images on the background, as in an ordinary GAN trained to generate natural images. Modelling the whole scene with a triangle mesh, however, is problematic in practice because of the large size of the (multiple) meshes that would be needed. We propose a compromise, where we only model the object with the mesh and generate a large background image, of which a random portion is cropped during training. In this way, even if the generator paints a view of the object onto the background, there is no guarantee that the crop will still contain the object. Thus, the generator will not be able to match the statistics of the real data unless it produces a realistic 3D object.
Assumption 4 The generator G and discriminator D have large enough capacity, and the training reaches the global optimum of the GAN training.
In practice, neural networks have finite capacity and we train them with a local iterative minimization solver (stochastic gradient descent), hence the global optimum might not be achieved. Nonetheless we show that the procedure works in practice. Now we show that under these conditions the generator can learn the 3D geometry of the scene faithfully.
Theorem 1 When the above assumptions are satisfied, the generated scene representation distribution is identical to the real one, thus G(z) ∼ p_m with z ∼ N(0, I). The proof can be readily adapted from the AmbientGAN analysis.
Differentiability is essential for functions used in neural network training. Unfortunately, traditional polygon renderers are only differentiable with respect to the texture values, not with respect to the 3D vertex coordinates. When the mesh is shifted by a small amount, a rendered pixel can jump to the background or to another triangle in the mesh, which can cause problems during training. Thus, we propose a novel renderer that is differentiable with respect to the 3D vertex coordinates. We make the renderer differentiable by extending the triangles at their boundaries by a fixed amount in pixel space. Then, we blend the extension against the background or against an occluded triangle. The rendering is done in two stages. First, we render an image with a traditional renderer, which we call the crisp image. Second, we render the triangle extensions and their alpha map, which we call the soft image. Finally we blend the crisp and the soft images. We render the crisp image using barycentric coordinates. Let us define the distance of a 2D pixel coordinate p to a triangle T as d(p, T) = min_{p_t ∈ T} d(p, p_t), and its closest point in the triangle as p*(T) = arg min_{p_t ∈ T} d(p, p_t), where d is the Euclidean distance. For each pixel and triangle, the rendered depth, attribute and alpha maps of the crisp layer can be computed by barycentric interpolation: z_c(p, T) = Σ_i b_i(p, T) z_i(T) for p inside T and z_c(p, T) = z_far otherwise, and a_c(p, T) = Σ_i b_i(p, T) a_i(T), where z_c indicates the depth of the crisp layer, b_i are the barycentric coordinates, z_i(T) are the depth values of the triangle vertices, and z_far is a large number representing infinity. The vertex attributes of a triangle T are a_i(T). The closest triangle index is computed as T*_c(p) = arg min_T z_c(p, T), and it determines which triangle is rendered for the attribute a_c(p) in the crisp image. The soft image a_s(p) is computed analogously over the triangle extensions, with an alpha value that falls off with the distance d(p, T) to the triangle, where B is the width of the extension around the triangle and λ_slope > 0 controls the slope of the falloff. The final image a(p) is computed by alpha-blending the crisp image over the soft image and the background, where bcg denotes the background crop. UV mapping is supported as well: first the UV coordinates are rendered for the crisp and the soft image; then the colors are sampled from the texture map for the soft and the crisp image separately; finally, the soft and the crisp images are blended. Figure 3 shows an illustration of the blended rendering as well as its effect on the training.
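The compositing step can be sketched as follows. This is a minimal, assumed blending rule (crisp over soft over background), not the paper's exact formula, and the tensor shapes are illustrative.

```python
import torch

def blend(a_crisp, alpha_crisp, a_soft, alpha_soft, bcg):
    """Composite the crisp render, the soft triangle extensions and the
    background crop into the final image.

    a_*: (3, H, W) colors; alpha_*: (1, H, W) coverage in [0, 1]; bcg: the
    background crop. Gradients w.r.t. vertex positions survive through the
    soft alpha, which varies smoothly at the triangle boundaries.
    """
    soft_over_bcg = alpha_soft * a_soft + (1 - alpha_soft) * bcg
    return alpha_crisp * a_crisp + (1 - alpha_crisp) * soft_over_bcg
```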
The 3D representation m = [s, t, b] consists of three parts, where s denotes the 3D shape of the object, t the texture, and b the background. The shape s is a 3-dimensional array of size 3 × N × N. We call this the shape image, where each pixel corresponds to a vertex in our 3D mesh and the pixel value determines the 3D coordinates of the vertex. The triangles are defined as a subset of the regular triangular mesh on the N × N grid; we only keep the triangles of the middle circular region. The texture (image) is an array of size 3 × N_t × N_t. The renderer uses the UV mapping technique, so the texture image can have a higher resolution than the shape image. In practice we choose N_t = 2N, so the triangles are roughly 1 or 2 pixels wide and the texture can match the image resolution when rendering an N × N image. The background b is a color image of size 3 × 2N × 2N. The renderer uses a perspective camera model, where the camera is pointing at the origin and placed along the Z axis such that the field of view is set so the unit ball fits tightly in the rendered image. The viewpoint change is interpreted as rotating the object in 3D space, while the camera stays still. Finally, a random N × N section of the background is cropped and put behind the object. Notice that the 3D representation (3-dimensional arrays) is a perfect match for convolutional neural network generators. We designed it this way so we can use StyleGAN as the generator. We use the StyleGAN generator with almost vanilla settings to generate the shape image, the texture image and the background. StyleGAN consists of two networks: an 8-layer fully connected mapping network that produces a style vector from the latent inputs, and a synthesis network that consists of convolutional and upsampling layers and produces the output image. The input of the synthesis network is constant and the activations at each layer are perturbed by the style vector using adaptive instance normalization. Noise is also added to the activations at each layer. It is also possible to mix styles from different latent vectors, by perturbing the convolutional layer activations with different styles at each layer. In our work we used the default setting for style mixing and for most parameters. We modified the number of output channels, the resolution and the learning rates. The training was done in a progressively growing fashion, starting at an image resolution of N = 16. The final resolution was set to N = 128. One StyleGAN instance (G_o) generates the shape image and texture, and another (G_b) generates the background. The inputs to both generators are 512-dimensional latent vectors z_o and z_b. We sampled them independently, assuming the background and the object are independent. We set both G_o and G_b to produce images at 2N × 2N resolution, where N is the rendered image size. The output of the object generator is then sliced into the shape and texture image, and the shape image is downsampled by a factor of 2. We multiplied the shape image by 0.002, which effectively sets a relative learning rate, then added s_0 to the output as an initial shape. For faces we set s_0 to a sphere with a radius r = 0.5 centered at the origin. For buses and cars, s_0 was a flat sheet. We noticed that during training the generation of shapes would not easily recover from bad local minima, which resulted in high-frequency artifacts and the hollow-mask ambiguity. Thus we use a shape image pyramid to tackle this problem.
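Before turning to the pyramid, a small illustrative sketch of how the shape image described above can be turned into mesh data. The grid triangulation shown is the standard split of each grid cell, and the circular-region masking is omitted, so details may differ from the actual implementation.

```python
import numpy as np

def shape_image_to_mesh(shape_image: np.ndarray):
    """Turn a (3, N, N) shape image into mesh vertices and triangle indices.

    Each pixel holds the 3D coordinates of one vertex; triangles follow the
    regular triangulation of the N x N grid.
    """
    _, n, _ = shape_image.shape
    vertices = shape_image.reshape(3, -1).T           # (N*N, 3) vertex coords
    tris = []
    for r in range(n - 1):
        for c in range(n - 1):
            i = r * n + c
            tris.append((i, i + 1, i + n))            # upper triangle of cell
            tris.append((i + 1, i + n + 1, i + n))    # lower triangle of cell
    return vertices, np.array(tris)
```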
The generator is set to produce K = 4 shape images; these images are then blurred by varying amounts and summed, s = Σ_{k=1}^{K} blur(s_k, σ_k), where blur(·) is a Gaussian blur and σ_k is interpreted in pixels on the shape image. We also noticed that the 3D models of the object tended to grow large and tried to model the background. This is the result of an acceptable ambiguity in the parametrization: in terms of the GAN objective it does not matter whether the background is modelled by b or by s and t. As we are interested in results where the object fits in the image, we added a size constraint on the object. The output coordinates are computed pixel-wise as s̄ = s_max · tanh(‖s‖₂) · s/‖s‖₂, where we set s_max = 1.3 and the L2 norm and tanh functions are interpreted pixel-wise on the shape image s. The effect of both the shape image pyramid and the size constraint can be seen in Figure 4. The discriminator architecture was also taken from StyleGAN and we trained it with default settings. (Figure 4, panels CRISP / SOFT / SIZE / FULL / VIEW / CAR / BUS: Samples from our methods. CRISP used the crisp renderer, while SOFT uses our proposed renderer. SIZE adds a size constraint in order to prevent the mesh from modelling the background. FULL is our final model that adds the shape pyramid parametrization. CAR and BUS use the same settings as FULL, except that they are initialized with a flat sheet instead of a sphere and we did not add a size constraint.) We trained our model on the FFHQ faces and on LSUN cars and buses. FFHQ contains 70k high-resolution (1024 × 1024) colour images. We resized the images to 128 × 128 and trained our generative model on all images in the dataset. For the viewpoint we found that randomly rotating in the range of ±45 degrees along the vertical and ±15 degrees along the horizontal axis yielded the best results. This is not surprising, as most faces in the dataset are close to frontal. For our full model we used a shape pyramid of 4 levels and a size constraint of 1.3, and we trained our model for 5M iterations. The results are shown in Figure 1 and more samples can be found in the Appendix. We also trained our model on 100k images from each of the LSUN categories. We used the same settings as for FFHQ, except for the initialization of the 3D shape and without a size constraint. In Figure 4 we show ablations of the choices of the renderer and the network architecture settings. We can see that our soft renderer has a large impact on the training: the crisp renderer cannot learn the shape. Furthermore, we can see that the size constraint prevents the mesh from modelling the background, and the shape pyramid reduces the artifacts and folds on the face. We can also see that it is important to set the viewpoint distribution accurately: with a large viewpoint range of ±120 degrees our method struggles to learn, producing self-intersecting meshes and putting multiple faces on the object. We can see that our method can also learn the 3D shape of other categories, such as cars and buses. Table 2 explains the options used in detail and shows quantitative results. Figure 5 shows results on interpolated latent vectors. We can see the viewpoint and the identity are disentangled and the transition is smooth.
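To recap the parametrization described at the start of this section, here is a hedged sketch of the shape image pyramid and the size constraint. The blur-sigma schedule is an assumption (the text only states that the K = 4 shape images are blurred by varying amounts), and the tanh-based bound follows the pixel-wise formula above.

```python
import torch
import torch.nn.functional as F

def gaussian_blur(x, sigma):
    """Minimal depthwise Gaussian blur for a (C, H, W) tensor."""
    if sigma == 0:
        return x
    k = int(2 * round(3 * sigma) + 1)               # kernel width ~ 6 * sigma
    t = torch.arange(k, dtype=x.dtype) - k // 2
    g = torch.exp(-0.5 * (t / sigma) ** 2)
    g = g / g.sum()
    kernel = torch.outer(g, g).view(1, 1, k, k).repeat(x.shape[0], 1, 1, 1)
    return F.conv2d(x.unsqueeze(0), kernel, padding=k // 2, groups=x.shape[0])[0]

def shape_pyramid(shape_images, sigmas=(0, 2, 4, 8)):
    """Sum the K generator outputs blurred by increasing amounts (assumed schedule)."""
    out = torch.zeros_like(shape_images[0])
    for s_k, sigma in zip(shape_images, sigmas):
        out = out + gaussian_blur(s_k, sigma)
    return out

def size_constraint(s, s_max=1.3):
    """Pixel-wise soft bound on vertex coordinates so that ||s_bar||_2 < s_max."""
    norm = s.norm(dim=0, keepdim=True).clamp_min(1e-8)   # per-pixel L2 norm
    return s_max * torch.tanh(norm) * s / norm
```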
Here, we would like to acknowledge the limitations of our work:
• Currently, we use a triangle mesh with fixed topology. In general, this is not sufficient for modeling challenging objects and scenes of arbitrary topology.
• The background is currently not modeled as part of the triangle mesh, which limits our method to datasets where the object is found at the center of the image. Note that this limitation is the result of the specific parametrization and the architecture of the generator, and not of the expressive power of our method in general.
• The imaging model is currently a Lambertian surface illuminated with ambient light. However, specularity and directional light can be added to the renderer model. This is a relatively simple extension, but the representation of lights as random variables during training needs extensive experimental evaluation.
One might claim that our work does use supervision, as the faces in the FFHQ dataset were carefully aligned to the center of the images. We argue that our method still does not use any explicit supervision signal other than the images themselves. Moreover, as discussed in the limitations section, the centering of the object will be irrelevant when the background and the object share the same 3D mesh and texture. In contrast, methods that use annotation cannot be extended to deal with more challenging datasets, where that annotation is not available. Another point is the motivation to generate faces in an unsupervised manner, since there already exist several datasets with lots of annotation. First, we chose FFHQ because it is a very clean dataset, and our intention is to demonstrate that unsupervised 3D shape learning is possible. Second, we believe that unsupervised learning is the right thing to do even if annotation is available: unsupervised methods can be extended to other datasets where that annotation is not available. In conclusion, we provide a solution to the challenging and fundamental problem of building a generative model of 3D shapes in an unsupervised way. We explore the ambiguities present in this task and provide remedies for them. Our analysis highlights the limitations of our approach and sets the direction for future work.
Appendix A (Samples). Figure 6: Samples from our generator trained on the FFHQ dataset at 128 × 128 resolution. The first column shows random rendered samples. The other columns show the 3D normal map, texture, and textured 3D shapes for 5 canonical viewpoints in the range of ±90 degrees. The images were picked to illustrate the range of quality achieved by our method. We can see that most of the samples have anatomically correct shapes. In some cases there are exaggerated features like a large chin that is only apparent from the profile view. Faces from those viewpoints are not present in the dataset, which might explain the shortcomings of our method. There are failure cases as well (the bottom row) that look realistic from the frontal view, but do not look like a face from the side.
We train a generative 3D model of shapes from natural images in a fully unsupervised way.
We focus on the problem of black-box adversarial attacks, where the aim is to generate adversarial examples using information limited to loss function evaluations of input-output pairs. We use Bayesian optimization (BO) to specifically cater to scenarios involving low query budgets to develop query efficient adversarial attacks. We alleviate the issues surrounding BO in regards to optimizing high dimensional deep learning models by effective dimension upsampling techniques. Our proposed approach achieves performance comparable to the state of the art black-box adversarial attacks albeit with a much lower average query count. In particular, in low query budget regimes, our proposed method reduces the query count up to 80% with respect to the state of the art methods. Neural networks are now well-known to be vulnerable to adversarial examples: additive perturbations that, when applied to the input, change the network's output classification . Work investigating this lack of robustness to adversarial examples often takes the form of a back-and-forth between newly proposed adversarial attacks, methods for quickly and efficiently crafting adversarial examples, and corresponding defenses that modify the classifier at either training or test time to improve robustness. The most successful adversarial attacks use gradient-based optimization methods , which require complete knowledge of the architecture and parameters of the target network; this assumption is referred to as the white-box attack setting. Conversely, the more realistic black-box setting requires an attacker to find an adversarial perturbation without such knowledge: information about the network can be obtained only through querying the target network, i.e., supplying an input to the network and receiving the corresponding output. In real-world scenarios, it is extremely improbable for an attacker to have unlimited bandwidth to query a target classifier. In evaluation of black box attacks, this constraint is usually formalized via the introduction of a query budget: a maximum number of queries allowed to the model per input, after which an attack is considered to be unsuccessful. Several recent papers have proposed attacks specifically to operate in this query-limited context (; 2018; ; ;); nevertheless, these papers typically consider query budgets on the order of 10,000 or 100,000. This leaves open questions as to whether black-box attacks can successfully attack a deep network based classifier in severely query limited settings, e.g., with a query budget of 100-200. In such a query limited regime, it is natural for an attacker to use the entire query budget, so we ask the pertinent question: In a constrained query limited setting, can one design query efficient yet successful black box adversarial attacks? This work proposes a black-box attack method grounded in Bayesian optimization , which has recently emerged as a state of the art black-box optimization technique in settings where minimizing the number of queries is of paramount importance. Straightforward application of Bayesian optimization to the problem of finding adversarial examples is not feasible: the input dimension of even a small neural network-based image classifier is orders of magnitude larger than the standard use case for Bayesian optimization. Rather, we show that we can bridge this gap by performing Bayesian optimization in a reduced-dimension setting and upsampling to obtain our final perturbation. 
We explore several upsampling techniques and find that a relatively simple nearest-neighbor upsampling method allows us to sufficiently reduce the optimization problem dimension such that Bayesian optimization can find adversarial perturbations with more success than existing black-box attacks in query-constrained settings. We compare the efficacy of our adversarial attack with a set of experiments attacking three of the most commonly used pretrained ImageNet classifiers: ResNet50, Inception-v3, and VGG16-bn. Results from these experiments show that with very small query budgets (under 200 queries), the proposed method BAYES-ATTACK achieves success rates comparable to or exceeding existing methods, and does so with far smaller average and median query counts. Further experiments are performed on the MNIST dataset to compare how various upsampling techniques affect the attack accuracy of our method. Given these results, we argue that, despite being a simple approach (indeed, largely because it is such a simple and standard approach for black-box optimization), Bayesian Optimization should be a standard baseline for any black-box adversarial attack task in the future, especially in the small query budget regime. Within the black-box setting, adversarial attacks can be further categorized by the exact nature of the information received from a query. The most closely related work to our approach is score-based attacks, where queries to the network return the entire output layer of the network, either as logits or probabilities. Within this category, existing approaches draw from a variety of optimization fields and techniques. One popular approach in this area is to attack with zeroth-order methods via some form of derivative-free gradient estimation, as in a method that uses time-dependent and data-dependent priors to improve the estimate, as well as one that replaces the gradient with a direction estimated using natural evolution strategies (NES). Other methods search for the best perturbation outside of this paradigm; one casts the problem of finding an adversarial perturbation as a discrete optimization problem and uses local search methods to solve it. These works all search for adversarial perturbations within a search space with a hard constraint on perturbation size; other work incorporates a soft version of this constraint and performs coordinate descent to decrease the perturbation size while keeping the perturbed image misclassified. The latter of these methods incorporates an autoencoder-based upsampling method with which we compare in Section 5.3.1. One may instead assume that only part of the information from the network's output layer is received as the result of a query. This can take the form of only receiving the output of the top k predicted classes, but more often the restrictive decision-based setting is considered. Here, queries yield only the predicted class, with no probability information. The most successful work in this area reformulates the problem as a search for the direction of the nearest decision boundary and solves it using a random gradient-free method; two related methods instead use random walks along the decision boundary to perform an attack. The latter work significantly improves over the former with respect to query efficiency, but the number of queries required to produce adversarial examples with small perturbations in this setting remains in the tens of thousands.
A separate class of transfer-based attacks trains a second, fully-observable substitute network, attacks this network with white-box methods, and transfers these attacks to the original target network. These may fall into one of the preceding categories or exist outside of the distinction: in one approach, the substitute model is built with score-based queries to the target network, whereas another trains an ensemble of models without directly querying the network at all. These methods come with their own drawbacks: they require training a substitute model, which may be costly or time-consuming, and overall attack success tends to be lower than that of gradient-based methods. Finally, there has been some recent interest in leveraging Bayesian optimization for constructing adversarial perturbations. Bayesian optimization (BO) has played a supporting role in several methods: for example, one uses BO to solve the δ-step of an alternating direction method of multipliers (ADMM) approach, another searches within a set of procedural noise perturbations using BO, and a third uses BO to find the maximal distortion error by optimizing perturbations defined using 3 parameters. On the other hand, prior work in which Bayesian optimization plays a central role performs experiments only on relatively low-dimensional problems, highlighting the main challenge of its application: one work examines an attack on a spam email classifier with 57 input features, and in another, image classifiers are attacked, but the attacks notably do not scale beyond MNIST classifiers. In contrast to these past works, the main contribution of this paper is to show that Bayesian Optimization presents a scalable, query-efficient approach for large-scale black-box adversarial attacks when combined with upsampling procedures. The following notation and definitions will be used throughout the remainder of the paper. Let F be the target neural network. We assume that F: [0, 1]^d → [0, 1]^K is a K-class image classifier that takes normalized inputs: each dimension of an input x ∈ R^d represents a single pixel and is bounded between 0 and 1, y ∈ {1, · · ·, K} denotes the original label, and the corresponding output F(x) is a K-dimensional vector representing a probability distribution over classes. Rigorous evaluation of an adversarial attack requires careful definition of a threat model: a set of formal assumptions about the goals, knowledge, and capabilities of an attacker. We assume that, given a correctly classified input image x, the goal of the attacker is to find a perturbation δ such that x + δ is misclassified, i.e., arg max_k F(x + δ)_k ≠ arg max_k F(x)_k. We operate in the score-based black-box setting, where we have no knowledge of the internal workings of the network, and a query to the network F yields the entire corresponding K-dimensional output. To enforce the notion that the adversarial perturbation should be small, we take the common approach of requiring that ‖δ‖_p be smaller than a given threshold ε in some ℓ_p norm, where ε varies depending on the classifier. This work considers the ℓ∞ norm, but our attack can easily be adapted to other norms. Finally, we denote the query budget with t; if an adversarial example is not found after t queries to the target network, the attack fails. As in most work, we pose the attack as a constrained optimization problem. We use a margin-type objective function suggested and used in prior work, f(x, y, δ) = max_{k≠y} log F(x + δ)_k − log F(x + δ)_y. (1) Most importantly, the input x + δ to f is an adversarial example for F if and only if f(x, y, δ) > 0.
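A minimal sketch of this objective, assuming the margin form reconstructed above and a model that returns a probability vector:

```python
import numpy as np

def attack_objective(probs: np.ndarray, y: int, eps: float = 1e-12) -> float:
    """Margin-type attack objective: positive iff x + delta is misclassified.

    probs: model output F(x + delta), a length-K probability vector.
    y: index of the true label.
    """
    logp = np.log(probs + eps)           # work in log-probabilities
    rival = np.delete(logp, y).max()     # best non-true class log-probability
    return float(rival - logp[y])        # > 0  <=>  argmax is not y
```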
We briefly note that the above threat model and objective function were chosen for simplicity and for ease of directly comparing with other black-box attacks, but the attack method we propose is compatible with many other threat models. For example, we may change the goals of the attacker or measure δ in the ℓ1 or ℓ2 norms instead of ℓ∞, with appropriate modifications to the objective function and constraints in equation 1. In this section, we present the proposed black-box attack method. We begin with a brief description of Bayesian optimization, followed by its application to generating black-box adversarial examples. Finally, we describe our method for attacking a classifier trained with high-dimensional inputs (e.g. ImageNet) in a query-efficient manner. Bayesian Optimization (BO) is a method for black-box optimization particularly suited to problems with low dimension and expensive queries. Bayesian Optimization consists of two main components: a Bayesian statistical model and an acquisition function. The Bayesian statistical model, also referred to as the surrogate model, is used for approximating the objective function: it provides a Bayesian posterior probability distribution that describes potential values for the objective function at any candidate point. This posterior distribution is updated each time we query the objective function at a new point. The most common surrogate models for Bayesian optimization are Gaussian processes (GPs), which define a prior over functions that are cheap to evaluate and are updated as and when new information from queries becomes available. We model the objective function h using a GP with prior distribution N(µ₀, Σ₀), with constant mean function µ₀ and a Matérn kernel as the covariance function Σ₀, e.g. the Matérn-5/2 kernel Σ₀(x, x′) = θ₀² (1 + √5 r + (5/3) r²) exp(−√5 r) with r² = Σ_{i=1}^{d} (x_i − x′_i)²/θ_i², where d is the dimension of the input and {θ_i}_{i=0}^{d} and µ₀ are hyperparameters. We select hyperparameters that maximize the posterior of the observations under a prior. The second component, the acquisition function A, assigns a value to each point that represents the utility of querying the model at this point given the surrogate model. We sample the objective function h at x_n = arg max_x A(x | D_{1:n−1}), where D_{1:n−1} comprises the n − 1 samples drawn from h so far. Although this itself may be a hard (non-convex) optimization problem to solve, in practice we use a standard approach and approximately optimize this objective using the L-BFGS algorithm. There are several popular choices of acquisition function; we use expected improvement (EI), defined as EI_n(x) = E_n[max(h(x) − h*_n, 0)], where E_n[·] = E[· | D_{1:n−1}] denotes the expectation taken over the posterior distribution given evaluations of h at x_1, · · ·, x_{n−1}, and h*_n is the best value observed so far. The Bayesian optimization framework, as shown in Algorithm 2, runs these two steps iteratively for the given budget of function evaluations: it updates the posterior probability distribution on the objective function using all the available data; then it finds the next sampling point by optimizing the acquisition function over the current posterior distribution of the GP; the objective function h is evaluated at this chosen point, and the whole process repeats. In theory, we may apply Bayesian optimization directly to the optimization problem in equation 1 to obtain an adversarial example, stopping once we find a point where the objective function rises above 0.
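For reference, the EI acquisition above has a closed form under a Gaussian posterior. Below is a minimal sketch; the xi exploration offset is an optional extra, not taken from the text.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_so_far, xi=0.0):
    """Closed-form EI for maximization under a Gaussian posterior.

    mu, sigma: posterior mean / std of the GP at the candidate points.
    best_so_far: h*_n, the best objective value observed so far.
    """
    sigma = np.maximum(sigma, 1e-12)          # avoid division by zero
    z = (mu - best_so_far - xi) / sigma
    return (mu - best_so_far - xi) * norm.cdf(z) + sigma * norm.pdf(z)
```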
In practice, however, Bayesian optimization's speed and overall performance fall dramatically as the input dimension of the problem increases. This makes running Bayesian optimization over high-dimensional inputs such as ImageNet (input dimension 3 × 299 × 299 = 268,203) practically infeasible; we therefore require a method for reducing the dimension of this optimization problem. Images tend to exhibit spatial local similarity, i.e., pixels that are close to each other tend to be similar. Prior work showed that this similarity also extends to gradients and used this to reduce query complexity. Our method uses this data-dependent prior to reduce the search dimension of the perturbation. We show that adversarial perturbations also exhibit spatial local similarity, so we do not need to learn the adversarial perturbation at the actual dimensions of the image. Instead, we learn the perturbation in a much lower dimension, and we obtain our final adversarial perturbation by interpolating the learned low-dimensional perturbation to the original input dimension. We define the objective function for running the Bayesian optimization in the low dimension in Algorithm 1. We let Π_{B_p(0,ε)} be the projection onto the ℓ_p ball of radius ε centered at the origin. Our method finds a low-dimensional perturbation and upsamples it to obtain the adversarial perturbation. Since this upsampled image may not lie inside the ball of radius ε centered at the origin, we project back to ensure ‖δ‖∞ remains bounded by ε. With the perturbation δ in hand, we compute the objective function of the original optimization problem defined in equation 1. We describe our complete framework in Algorithm 2, where x₀ ∈ R^d and y₀ ∈ {1, . . ., K} denote the original input image and label respectively. The goal is to learn an adversarial perturbation δ′ ∈ R^{d′} in a much lower dimension, i.e., d′ ≪ d. We begin with a small dataset D = {(δ′₁, v₁), · · ·, (δ′_{n₀}, v_{n₀})}, where each δ′_n is a d′-dimensional vector sampled from a given distribution and v_n is the function evaluation at δ′_n, i.e., v_n = OBJ-FUNC(x₀, y₀, δ′_n). (Algorithm 2, in outline: query n₀ randomly chosen points; update the GP on D, i.e., update the posterior distribution using the available points; set t ← n₀, the number of queries so far; while t ≤ T, optimize the acquisition function over the GP to propose the next perturbation, query the model, and update D, the GP and t.) We iteratively update the posterior distribution of the GP using all available data and query new perturbations, obtained by maximizing the acquisition function over the current posterior distribution of the GP, until we find an adversarial perturbation or run out of query budget. The Bayesian optimization iterations run in the low dimension d′, but for querying the model we upsample, project and then add the perturbation to the original image, as shown in Algorithm 1, to make the perturbed image conform to the input space of the model. To generate a successful adversarial perturbation, it is necessary and sufficient to have v_t > 0, as described in Section 3. We call our attack successful with t queries to the model if the Bayesian optimization loop exits after t iterations (line 12 in Algorithm 2); otherwise it is unsuccessful. Finally, we note that the final adversarial image can be obtained by upsampling the learned perturbation and adding it to the original image, as shown in Figure 1. In this work, we focus on ℓ∞-norm perturbations, where the projection is defined as [Π_{B_∞(0,ε)}(δ)]_i = min(max(δ_i, −ε), ε), and ε is the given perturbation bound. The upsampling method can be linear or non-linear; in this work, we conduct experiments using nearest-neighbor upsampling.
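A minimal sketch of the OBJ-FUNC routine just described, assuming nearest-neighbor upsampling and a model that returns a probability vector for a single image; clipping the perturbed image back to the valid pixel range is our assumption.

```python
import numpy as np

def upsample_nn(delta_low, out_hw):
    """Nearest-neighbor upsampling of a (C, h, w) array to (C, H, W)."""
    c, h, w = delta_low.shape
    H, W = out_hw
    rows = np.arange(H) * h // H
    cols = np.arange(W) * w // W
    return delta_low[:, rows][:, :, cols]

def obj_func(model, x0, y0, delta_low, eps):
    """OBJ-FUNC of Algorithm 1: upsample, project onto B_inf(0, eps), query."""
    delta = upsample_nn(delta_low, x0.shape[1:])
    delta = np.clip(delta, -eps, eps)          # Pi_{B_inf(0, eps)} projection
    x_adv = np.clip(x0 + delta, 0.0, 1.0)      # stay in the valid pixel range
    logp = np.log(model(x_adv) + 1e-12)        # one query to the network
    rival = np.delete(logp, y0).max()          # best non-true class
    return float(rival - logp[y0])             # > 0  <=>  attack succeeded
```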
A variational autoencoder or vanilla autoencoder could also be trained to map the low-dimensional perturbation to the original input space. We compare these different upsampling schemes in Section 5.3.1. The initial choice of the dataset D to form a prior can be made using a standard normal distribution, a uniform distribution, or even in a deterministic manner (e.g. with Sobol sequences). Our experiments focus on the untargeted attack setting, where the goal is to perturb an image originally classified correctly by the classification model so as to cause a misclassification. We primarily consider the performance of BAYES-ATTACK on ImageNet classifiers and compare it to other black-box attacks in terms of success rate over a given query budget. We also perform ablation studies on the MNIST dataset by examining different upsampling techniques and varying the latent dimension d′ of the optimization problem. We define success rate as the ratio of the number of images successfully perturbed for a given query budget to the total number of input images. In all experiments, images that are already misclassified by the target network are excluded from the test set; only images that are initially classified with the correct label are attacked. For each method of attack and each target network, we compute the average and median number of queries used among images that were successfully perturbed. We treat the latent dimension d′ used for running the Bayesian optimization loop as a hyperparameter. For MNIST, we tune the latent dimension d′ over {16, 64, 100, 256, 784}. Note that 784 is the original input dimension for MNIST. For ImageNet, we search for the latent dimension d′ and shape over the range {48 (3 × 4 × 4), 49 (1 × 7 × 7), 100 (1 × 10 × 10), 108 (3 × 6 × 6), 400 (1 × 20 × 20), 432 (3 × 12 × 12), 576 (1 × 24 × 24), 588 (3 × 14 × 14), 961 (1 × 31 × 31), 972 (3 × 18 × 18)}. For ImageNet, latent shapes whose first dimension is 1 indicate that the same perturbation is added to all three channels, while shapes whose first dimension is 3 indicate that the perturbations differ across channels. In the case of ImageNet, we found that for ResNet50 and VGG16-bn different perturbations across channels work much better than adding the same perturbation across channels, while for Inception-v3 both seem to work equally well. We initialize the GP with n₀ = 5 samples drawn from a standard normal distribution. For all the experiments in the next section, we use expected improvement as the acquisition function. We also examined other acquisition functions (posterior mean, probability of improvement, upper confidence bound) and observed that our method works equally well with them. We independently tune the hyper-parameters on a small validation set and exclude it from our final test set. We used the BoTorch package for the implementation. We compare the performance of the proposed method BAYES-ATTACK against NES, BANDITS-TD and PARSIMONIOUS, the last of which is the current state of the art among score-based black-box attacks within the ℓ∞ threat model. On ImageNet, we attack the pretrained ResNet50, Inception-v3 and VGG16-bn networks. We use 10,000 randomly selected images (normalized to [0, 1]) from the ImageNet validation set that were initially correctly classified. We set the ℓ∞ perturbation bound ε to 0.05 and evaluate the performance of all the methods for low query budgets. We use the publicly available implementations of NES and BANDITS-TD; similarly, for PARSIMONIOUS, we use the implementation and hyperparameters given by the authors.
Figure 2 compares the performance of the proposed method BAYES-ATTACK against the set of baseline methods in terms of success rate at different query budgets. We can see that BAYES-ATTACK consistently performs better than the baseline methods for query budgets < 200. Even for query budgets > 200, BAYES-ATTACK achieves better success rates than BANDITS-TD and NES on ResNet50 and VGG16-bn. Finally, we note that for higher query budgets (> 1000), both the PARSIMONIOUS and BANDITS-TD methods perform better than BAYES-ATTACK. To compare the success rate and average/median queries, we select a point on the plots shown in Figure 2. Table 1 compares the performance of all the methods in terms of success rate, average and median queries for a query budget of 200. We can see that BAYES-ATTACK achieves a higher success rate with 80% fewer average queries compared to the next best PARSIMONIOUS method. Thus, we argue that although the Bayesian Optimization adversarial attack approach is to some extent a "standard" application of traditional Bayesian Optimization methods, the performance over the existing state of the art makes it a compelling approach, particularly for the very low query setting. We also compare the average ℓ2 distortion of the generated adversarial perturbations in Appendix B. (Figure 3(a): performance comparison with different upsampling schemes.) For MNIST, we use a pretrained network (used in prior work) with 4 convolutional layers, 2 max-pooling layers and 2 fully-connected layers, which achieves 99.5% accuracy on the MNIST test set. We conduct ℓ∞ untargeted adversarial attacks with perturbation bound ε = 0.2 on 1000 randomly sampled images from the test set. All the experiments performed on MNIST follow the same protocols. The proposed method requires an upsampling technique for mapping the perturbation learnt in the latent dimension to the original input dimension. In this section, we examine different linear and non-linear upsampling schemes and compare their performance on MNIST. The approaches we consider can be divided into two broad groups: encoder-decoder based methods and interpolation methods. For interpolation-based methods, we consider nearest-neighbor, bilinear and bicubic interpolation. For encoder-decoder based approaches, we train a variational autoencoder by maximizing a variational lower bound on the log marginal likelihood. We also consider a simple autoencoder trained by minimizing the mean squared loss between the generated image and the original image. For both approaches, we run the Bayesian optimization loop in the latent space and use the pretrained decoder (or generator) for mapping the latent vector into image space. For these approaches, rather than searching for an adversarial perturbation δ in the latent space, we learn the adversarial image x + δ directly using the Bayesian optimization. Figure 3a compares the performance of the different upsampling methods. We can see that nearest-neighbor (NN) interpolation and the VAE-based decoder perform better than the rest of the upsampling schemes. However, NN interpolation achieves performance similar to the VAE-based method without the need for the large training dataset that is required for accurately training a VAE-based decoder. We perform a sensitivity analysis on the latent dimension hyperparameter d′ used for running the Bayesian optimization. We vary the latent dimension over the range d′ ∈ {9, 16, 64, 100, 256, 784}. Figure 3b shows the performance of the nearest-neighbor interpolation method for different latent dimensions.
We observe that lower latent dimensions achieve better success rates than the original input dimension d = 784 for MNIST. This could be because, as the search dimension increases, Bayesian optimization needs more queries to find a successful perturbation. We also note that for the case of latent dimension d′ = 9, BAYES-ATTACK achieves lower success rates, which could mean that it is hard to find adversarial perturbations in such a low dimension. We show the convergence plots of BAYES-ATTACK on ImageNet and MNIST in Appendix A. We considered the problem of black-box adversarial attacks in settings involving constrained query budgets. We employed a Bayesian optimization based method to construct a query-efficient attack strategy. The proposed method searches for an adversarial perturbation in a low-dimensional latent space using Bayesian optimization and then maps the perturbation to the original input space using the nearest-neighbor upsampling scheme. We successfully demonstrated the efficacy of our method in attacking multiple deep learning architectures with high-dimensional inputs. Our work opens avenues regarding applying BO for adversarial attacks in high-dimensional settings.
Appendix A (convergence). In this section, we show the convergence of Bayesian Optimization (BO) in terms of objective value versus the number of queries. Note that we have framed our objective of finding an adversarial perturbation as a maximization problem, and we stop the iteration loop of BO once the objective value reaches a positive value. A positive objective value corresponds to a successful adversarial perturbation, as described in Section 3. Figure 4 shows the convergence of the objective function of BAYES-ATTACK on RESNET50 trained on ImageNet, as described in Section 5.2. We run the BO in 972 dimensions (3 × 18 × 18) and upsample the perturbation to the original input dimension of 150,528 (3 × 224 × 224). The plot shows ten randomly chosen images from the validation set, with different colors representing different images. We also show the convergence of the BO on MNIST by varying the latent dimension. Specifically, we compare the convergence with latent dimension 16 (4 × 4) against the original input dimension 784 (28 × 28). The plot is shown in Figure 5. Each color represents a test image, while dashed lines and solid lines represent runs of BO using 16 and 784 dimensions, respectively. As we can see from the graph, BO in 784 dimensions does not converge to a successful attack (i.e., objective value > 0) in 500 iterations on either of the images, while BO with 16 dimensions on the same images finds the adversarial perturbation in less than 200 iterations. This aligns with our observation that with increasing latent dimension it becomes harder for BO to find a successful perturbation, and indeed it would require many more queries than running BO in lower dimensions.
Appendix B (ℓ2 distortion). We compare the average ℓ2 distortion per image of the proposed method BAYES-ATTACK with the current state-of-the-art methods, including gradient-based approaches, on RESNET50 trained on ImageNet. We fix the query budget at 200, as in the experiments described in Section 5.2, and compute the distortion using only the successful adversarial perturbations. As we can see from Table 2, the ℓ2 distortion of adversarial examples generated using BAYES-ATTACK is comparable to that of the current state-of-the-art methods, while BAYES-ATTACK achieves a better attack success rate in low query budget regimes.
Having said that, as in BANDITS-TD and PARSIMONIOUS, our approach focuses on finding successful adversarial perturbations subject to a pre-defined maximum distortion specified in terms of the ℓ∞ distance.
We show that a relatively simple black-box adversarial attack scheme using Bayesian optimization and dimension upsampling is preferable to existing methods when the number of available queries is very low.
State-of-the-art deep neural networks (DNNs) typically have tens of millions of parameters, which might not fit into the upper levels of the memory hierarchy, thus increasing the inference time and energy consumption significantly, and prohibiting their use on edge devices such as mobile phones. The compression of DNN models has therefore become an active area of research recently, with \emph{connection pruning} emerging as one of the most successful strategies. A very natural approach is to prune connections of DNNs via $\ell_1$ regularization, but recent empirical investigations have suggested that this does not work as well in the context of DNN compression. In this work, we revisit this simple strategy and analyze it rigorously, to show that: (a) any \emph{stationary point} of an $\ell_1$-regularized layerwise-pruning objective has its number of non-zero elements bounded by the number of penalized prediction logits, regardless of the strength of the regularization; (b) successful pruning highly relies on an accurate optimization solver, and there is a trade-off between compression speed and distortion of prediction accuracy, controlled by the strength of regularization. Our theoretical results thus suggest that $\ell_1$ pruning could be successful provided we use an accurate optimization solver. We corroborate this in our experiments, where we show that simple $\ell_1$ regularization with an Adamax-L1(cumulative) solver gives pruning ratios competitive with the state of the art. State-of-the-art Deep Neural Networks (DNNs) typically have millions of parameters. For example, the VGG-16 network (BID0), from the winning team of ILSVRC-2014, contains more than one hundred million parameters; inference with this network on a single image takes tens of billions of operations, prohibiting its use on edge devices such as mobile phones or in real-time applications. In addition, the huge size of DNNs often precludes them from being placed at the upper level of the memory hierarchy, with resulting slow access times and expensive energy consumption. A recent thread of research has thus focused on the question of how to compress DNNs. One successful approach that has emerged is to trim the connections between neurons, which reduces the number of non-zero parameters and thus the model size (BID1; BID3; BID4; BID5; BID6; BID7). However, there has been a gap between the theory and practice: the trimming algorithms that have been practically successful (BID1; BID3) do not have theoretical guarantees, while theoretically-motivated approaches have been less competitive compared to the heuristics-based approaches (BID5), and often rely on stringent distributional assumptions such as Gaussian-distributed matrices, which might not hold in practice. With a better theoretical understanding, we might be able to answer how much pruning one can achieve via different approaches on different tasks, and moreover when a given pruning approach might or might not work. Indeed, as we discuss in our experiments, even the generally practically successful approaches are subject to certain failure cases. Beyond simple connection pruning, there have been other works on structured pruning that prune a whole filter, whole row, or whole column at a time (BID8; BID9; BID10; BID12).
The structured pruning strategy can often speed up inference at prediction time more than simple connection pruning, but the pruning ratios are typically not as high as for non-structured connection pruning, so the storage complexity is still too high and the caveats we noted earlier largely remain. A very natural strategy is to use ℓ1-regularized training to prune DNNs, due to its considerable practical success in general sparse estimation in shallow model settings. However, many recent investigations seemed to suggest that such ℓ1 regularization does not work as well with non-shallow DNNs, especially compared to other proposed methods. Does ℓ1 regularization not work as well in non-shallow models? In this work, we theoretically analyze this question and revisit the trimming of DNNs through ℓ1 regularization. Our analysis provides two interesting findings: (a) for any stationary point under ℓ1 regularization, the number of non-zero parameters in each layer of a DNN is bounded by the number of penalized prediction logits, an upper bound typically several orders of magnitude smaller than the total number of DNN parameters; and (b) it is critical to employ an ℓ1-friendly optimization solver with a high precision in order to find the stationary point of sparse support. Our theoretical findings thus suggest that one could achieve high pruning ratios even via ℓ1 regularization provided one uses high-precision solvers (which, we emphasize, are typically not required if we only care about prediction error rather than sparsity). We corroborate these findings in our experiments, where we show that solving the ℓ1-regularized objective by the combination of SGD pretraining and Adamax-L1(cumulative) yields competitive pruning compared to the state of the art. Let X^(0): N × D_1 × · · · × D_p × K_0 be an input tensor, where N is the number of samples (or batch size), D_1, …, D_p are the spatial dimensions, and K_0 is the number of input channels. We are interested in DNNs of the form X^(J) = σ_{W^(J)}(σ_{W^(J−1)}(· · · σ_{W^(1)}(X^(0)) · · ·)), where the σ_{W^(j)}(X^(j−1)) are piecewise-linear functions of both the parameter tensor W^(j) and the layer input X^(j−1). Examples of such piecewise-linear functions include: (a) convolution layers with ReLU activation, σ_{W^(j)}(X) = max(W^(j) • X, 0) (using • to denote the p-dimensional convolution operator); (b) fully-connected layers with ReLU activation, σ_{W^(j)}(X) = max(X W^(j), 0); and (c) commonly used operations such as max-pooling, zero-padding and reshaping. Note X^(J): N × K provides K scores (i.e. logits) for each sample that relate to the labels of our task; we denote by L(X^(J), Y) the task-specific loss function. We define the Support Labels of a DNN X^(J) as the indices (i, k) of non-zero loss subgradients w.r.t. the prediction logits: Definition 1 (Support Labels). Let L(X, Y) be a convex loss function w.r.t. the prediction logits X. The Support Labels regarding the DNN outputs X^(J)(W) are defined as S(W) := {(i, k) | [A]_{i,k} ≠ 0 for some A ∈ ∂_X L(X^(J)(W), Y)}. We will denote by k_S(W) := |S(W)|/N ≤ K the average number of support labels per sample. We illustrate these concepts in the context of some standard machine learning tasks. Multiple Regression. In multiple regression, we are interested in multiple real-valued labels, such as the location and orientation of objects in an image, which over the set of N samples can be expressed as an N × K real-valued matrix Y. A popular loss function for such tasks is the squared loss L(X, Y) = ½‖X − Y‖²_F, which is convex and differentiable, and in general we have [∇_X L]_{i,k} = X_{i,k} − Y_{i,k} ≠ 0; therefore all labels are support labels (i.e. k_S = K). Binary Classification. In binary classification, the labels are binary-valued, and over the set of N samples can be represented as a binary vector y ∈ {−1, 1}^N. Popular loss functions include the logistic loss, L(x, y) = Σ_{i=1}^{N} log(1 + exp(−y_i x_i)), and the hinge loss, L(x, y) = Σ_{i=1}^{N} max(1 − y_i x_i, 0). For the logistic loss, we have [∇_x L]_i = −y_i/(1 + exp(y_i x_i)) ≠ 0 for every sample, so all labels are support labels. On the other hand, the hinge loss typically has only a small portion of samples with margin y_i x_i ≤ 1; these are known as the Support Vectors, and this set coincides with our definition of Support Labels in this context. In applications with unbalanced positive and negative examples, such as object detection, we have k_S ≪ 1.
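To make Definition 1 concrete for the hinge loss just discussed, a minimal sketch that counts support labels. Whether the kink at margin exactly 1 is counted is a convention; the subdifferential there contains non-zero elements, so we include it.

```python
import numpy as np

def support_labels_hinge(logits: np.ndarray, y: np.ndarray) -> float:
    """Average number of support labels per sample for the binary hinge loss.

    logits: shape (N,) real-valued scores x_i; y: shape (N,) labels in {-1, +1}.
    A sample is a support label when the hinge subgradient can be non-zero,
    i.e., when the margin y_i * x_i is at most 1.
    """
    margins = y * logits
    support = margins <= 1.0           # region with non-zero subgradients
    return float(support.mean())       # k_S = |S| / N  (here K = 1)
```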
Multiclass/Multilabel Classification. In multiclass or multilabel classification, the labels of each sample can be represented as a K-dimensional binary vector in {0, 1}^K, where 1/0 denotes the presence/absence of a class in the sample. Let P_i := {k | y_{ik} = 1} and N_i := {k | y_{ik} = 0} denote the positive and negative label sets. Popular loss functions include the cross-entropy loss, L(X, Y) = −Σ_i Σ_{k∈P_i} log(exp(X_{ik}) / Σ_{k′} exp(X_{ik′})), and maximum-margin losses such as L(X, Y) = Σ_i Σ_{k∈P_i} Σ_{k′∈N_i} max(1 − X_{ik} + X_{ik′}, 0). Although the cross-entropy loss has number of support labels k_S = K, it has been shown that the maximum-margin loss typically has k_S ≪ K in recent studies of classification problems with an extremely large number of classes (BID13). In this section, we aim to solve the following DNN compression problem. Definition 2 (Deep-Trim). Suppose we are given a target loss function L(X, Y) between predictions X and training labels Y: N × K, and a pre-trained DNN X^(J) parameterized by weights W := {W^(j)}_{j=1}^{J}. The Deep-Trim task is to find a compressed DNN with weights Ŵ such that its number of non-zero parameters satisfies nnz(Ŵ) ≤ τ, for some τ ≪ nnz(W), and where L(X^(J)(Ŵ), Y) ≤ L(X^(J)(W), Y) + ε. In the following, we show that the Deep-Trim problem with budget τ = (N k_S) × J can be solved via simple ℓ1 regularization under a couple of mild conditions, with the caveat that suitable optimization algorithms be used, where k_S is a bound on the number of support labels for any parameters whose loss is no larger than that of the pre-trained network plus ε. Trimming Objective. Given a loss function L(·, Y) and a pre-trained DNN parameterized by W*, we initialize the iterate with W* and apply an optimization algorithm that guarantees descent of the following layerwise ℓ1-regularized objective: min_{W^(j)} L(X^(J)(W), Y) + λ ‖vec(W^(j))‖₁, (2) for all j ∈ [J], where vec(W^(j)) denotes the vectorized version of the tensor W^(j). The following theorem states that most stationary points of (2) have their number of non-zero parameters per layer bounded by the total number of support labels in the training set. Theorem 1 (Deep-Trim with ℓ1 penalty). Let W^(j) be any stationary point of objective (2) with dim(W^(j)) = d that lies on a single linear piece of the piecewise-linear function X^(J)(W). Let V: (N K) × d be the Jacobian matrix of that corresponding linear piece of the linear (vector-valued) function vec(X^(J))(vec(W^(j))). For any regularization parameter λ > 0 and V in general position, we have nnz(W^(j)) ≤ N k_S(W), where k_S(W) is the average number of support labels of the stationary point W^(j). Proof. Any stationary point of (2) must satisfy the condition V^T vec(A) + λρ = 0, where A ∈ ∂L is an N × K subgradient matrix of the loss function w.r.t. the prediction logits and ρ ∈ ∂‖vec(W^(j))‖₁. Let E := {r | [W^(j)]_r ≠ 0} be the set of indices of non-zero parameters; we have [ρ]_r ∈ {−1, 1} for r ∈ E, and thus the linear system [V^T vec(A)]_E = −λ[ρ]_E cannot be satisfied if nnz(A) < nnz(W^(j)) when V is in general position (as defined in, for example, BID15). Therefore, we have nnz(W^(j)) ≤ nnz(A) ≤ N k_S(W). Note the concept of general position is studied widely in the literature on the LASSO and sparse recovery, and it is a weak assumption in the sense that any matrix drawn from a continuous probability distribution is in general position (BID15).
Figure 1 illustrates an example of a regression task where, no matter how small λ > 0 is, the second coordinate is always 0 at the stationary point. Note that since Theorem 1 holds for any λ > 0, one can guarantee to trim a DNN without hurting the training loss by choosing an appropriately small λ, as stated by the following corollary. Corollary 1 (Deep-Trim without Distortion). Given a DNN with weights W* and loss L* := L(X^(J)(W*), Y), for any ε > 0 one can find Ŵ with nnz(Ŵ) ≤ (N k_S) × J and L(X^(J)(Ŵ), Y) ≤ L* + ε, where k_S is a bound on the number of support labels of parameters W with loss no more than L* + ε. Proof. By choosing λ ≤ ε/(J ‖vec(W^(j))‖₁), any descent optimization algorithm is guaranteed to find Ŵ^(j) whose objective (2) is no larger than that of the initialization, so the loss increases by at most λ‖vec(W^(j))‖₁ ≤ ε/J. Then by applying the procedure for each layer j ∈ [J], one can obtain Ŵ with L(X^(J)(Ŵ), Y) ≤ L* + ε. In practice, however, the smaller λ is, the harder it is for the optimization algorithm to get close to the stationary point, as illustrated in Figure 1. Therefore, it is crucial to choose optimization algorithms targeting high precision for the convergence to a stationary point of (2) with sparse support, while the widely-used Stochastic Gradient Descent (SGD) method is notorious for being inaccurate in terms of optimization precision. Although our analysis is conducted on the layerwise pruning objective, in practice we have observed joint pruning of all layers to be as effective as layerwise pruning. For ease of presentation, in this section we will denote our objective function min_W L(X^(J)(W), Y) + λ Σ_{j=1}^{J} ‖vec(W^(j))‖₁ in the following form: min_w f(w) + λ‖w‖₁, (3) where w := vec(W) and f(w) := L(X^(J)(W), Y). Note the same formulation can also be used to represent the layerwise pruning objective by simply replacing these definitions with w := vec(W^(j)) and f viewed as a function of W^(j) with the other layers fixed. As mentioned previously, even when the stationary point of an objective has sparse support, if the optimization algorithm does not converge close enough to the stationary point, the iterates will still have very dense support. In this section, we propose a two-phase strategy for the non-convex optimization problem. In the first phase, we initialize with the given model and use a simple Stochastic Gradient Descent (SGD) algorithm to optimize. During this phase, we do not aim to reduce the number of non-zero parameters but only to reduce the ℓ1 norm of the model parameters. We run the SGD until both the training loss and the ℓ1 norm of the model parameters have converged. Then in the second phase, we employ an Adamax-L1 (cumulative) method to reduce the total number of non-zero parameters, which achieves pruning on par with state-of-the-art methods. SGD with L1 Penalty. For a simple optimization problem min_{w∈R^d} f(w), the SGD update follows the form w_{t+1} = w_t − η_t ∂f(w_t)/∂w. We consider general SGD-like algorithms which update in the form w_{t+1} = w_t − η_t g(∂f(w_t)/∂w, θ), where θ is a set of parameters specific to the SGD-like update procedure. This includes the commonly used Momentum, Adamax, Adam (Kingma and Ba, 2014), and RMSProp (Tieleman and Hinton, 2012) optimization algorithms. When employing SGD-like optimizers, objective (3) can be rewritten as min_w (1/N_b) Σ_{j=1}^{N_b} f_j(w) + λ‖w‖₁, where j denotes one mini-batch of data and N_b is the number of mini-batches. The weight update performed by the SGD-like optimizers can then be written as w_{t+1} = w_t − η_t g(∂f_j(w_t)/∂w + λ sign(w_t), θ), where sign(w_i) = 0 when w_i = 0. We note that after this update the weight does not become 0 unless w_i^t exactly cancels its own update step, which rarely happens. Therefore, adding the L1 penalty term to SGD-like optimizers only minimizes the L1 norm but does not induce a sparse weight matrix.
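As a small illustration of this point, a sketch of the naive subgradient step; numpy's sign(0) = 0 matches the convention above, and the update almost never lands a weight exactly on zero.

```python
import numpy as np

def sgd_l1_naive_step(w, grad, lr, lam):
    """Naive SGD step on f(w) + lam * ||w||_1.

    The subgradient lam * sign(w) shrinks the weights but almost never moves
    them exactly onto zero, so this update does not produce a sparse w.
    """
    return w - lr * (grad + lam * np.sign(w))
```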
To achieve a sparse solution, we combine the L1 friendly update trick SGD-L1 (cumulative) BID19 ) along with SGD-like optimization algorithms. Adamax-L1 (cumulative) SGD-L1 (clipping) is an alternative to perform L1 regularizing along with SGD to obtain a sparse w BID20 ). Different to, SGD-L1 (clipping) divides the update into two steps. The first step is updated without considering the L1 penalty term, and the second step updates the L1 penalty separately. In the second step, any weight that has changed its sign during the update will be set to 0. In other words, when the L1 penalty is larger than the weight value, it will be truncated to the weight value. Therefore, SGD-L1 (clipping) can be seen as a special case of truncated gradient. With a learning rate η k, the update algorithm can be written as DISPLAYFORM7 SGD-L1 (cumulative) is a modification of the SGD-L1 (clipping) algorithm proposed by BID19, but uses the cumulative L1 penalty instead of the standard L1 penalty. The intuition is that the cumulative L1 penalty is the amount of penalty that would be applied on the weight if true gradient is applied instead of stochastic gradient. By applying the cumulative L1 penalty, the weight would not be moved away from zero by the noise of the stochastic gradient. When applied to SGD-like optimization algorithms, the update rule can be written as DISPLAYFORM8 where q k i is the total amount of L1 penalty received until now q DISPLAYFORM9 By updating with and adopting the Adamax optimization algorithm (Kingma and Ba FORMULA0, we obtain Adamax-L1 (cumulative). Originally, SGD-L1 (cumulative) was proposed to be used with the vanilla SGD optimizer, where we generalize it to be used with any SGD-like optimizer by separating the update on objective f (w) and the l1-cumulative update on λ w 1. In this section, we compare the -regularized pruning method discussed in section 4 with other state-of-the-art approaches. In section 5.1, we evaluate different pruning methods on the convolution network LeNet-5 1 on the Mnist data set. In section 5.2, we compare our method to VD on pruning VGG-16 network BID0 on the CIFAR-10 data set. In section 5.3, we then conduct experiments with Resnet on CIFAR-10. Finally, we show the trade-off for pruning Resnet-50 on the ILSVRC dataset. Acc.% nnz per Layer% Table 1: Compression Results with LeNet-5 model on MNIST. We first compare our methods with other compression methods on the standard MNIST dataset with the LeNet-5 architecture. We consider the following methods: Prune: The pruning algorithm proposed in BID1, which iterates between pruning the network after training with L2 regularization and retraining. DNS: Dynamic Network Surgery pruning algorithm proposed in BID3, which was reported to improve upon the iterative pruning method proposed in BID1 by dynamically pruning and splicing variables during the training process. VD: Variational Dropout method introduced by BID4, a variant of dropout that induces sparsity during the training process with unbounded dropout rates. L1 Naive: Ablation study of our method by training the 1 -regularized objective with SGD. Ours: Our method which optimizes the 1 -regularized objective in two phases (SGD and Adamax-L1(cumulative)). The LeNet-5 network is trained from a random initialization and without data augmentation which achieves 99.2% accuracy. We report the per layer sparsity and the total reduction of weights and Table 2: Compression Results with VGG-like model on CIFAR-10 for VD and our method. FLOP in Table 1. 
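Circling back to the optimizer: the cumulative L1 step described above can be sketched as follows (after BID19, generalized to follow any SGD-like step; variable names and interface are ours, a sketch rather than the exact implementation):

    import numpy as np

    def l1_cumulative_step(w, q, u):
        """Cumulative L1 step, applied after the SGD-like optimizer's update on f(w).

        w : weights already updated on f by the optimizer (e.g. Adamax)
        q : per-weight L1 penalty actually applied so far (signed)
        u : scalar total L1 penalty owed so far; caller does u += lr * lam each step
        """
        z = w.copy()
        pos, neg = w > 0, w < 0
        w[pos] = np.maximum(0.0, w[pos] - (u + q[pos]))   # pay the remaining owed
        w[neg] = np.minimum(0.0, w[neg] + (u - q[neg]))   # penalty, clipping at zero
        q += w - z                                        # record what was paid
        return w, q

The caller advances u by lr * lam once per step, immediately after the optimizer's update on f(w); q then tracks how much of the owed penalty each weight has actually absorbed, so noisy mini-batch gradients cannot repeatedly pull a weight away from zero.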
For LeNet-5, our method achieves comparable sparsity performance against other methods, with a slight accuracy drop. Nevertheless, our compressed model still achieves over 99 percent testing accuracy, while achieving 260× weight reduction and 84× FLOP reduction. We also observe that the L1 Naive does not induce any sparsity, even when the L1-norm is significantly reduced. This demonstrates the effectiveness of adopting a L1-friendly optimization algorithm. To test how our method works on large scale modern architecture, we perform experiments on the VGG-like network with CIFAR-10 dataset, which is used in BID4. The network contains 13 convolution layers and 2 fully connected layers and achieves 92.9% accuracy with pretraining. We report the per layer weights and FLOP reduction for our Deep-Trim algorithm and VD BID4 ) in Table 5. Our model achieves a weight pruning ratio of 57× and reduces FLOP by 7.7× with a negligible accuracy drop, and VD achieves 48× weight pruning ratio and reduces FLOP by 6.4×. 2 Compared to VD, our model achieved sparser weights from Conv1_1 to Conv5_2 and VD achieved sparser weights from Conv5_2 to FC layers. Interestingly, we observe that in both pruning methods, most remaining nnz and FLOPs lie in block2 and block3, where originally block4 and block5 have dominating amount of weights and equal amount of FLOPs. The layer with the most non-zero parameters after pruning is conv3_2 with 65.9K. In the experiments we employ the cross-entropy loss which has a number of support labels N K = 500K on the CIFAR-10 data set. We suspect a more careful analysis could improve our Theorem 1 to give a tighter bound for loss with entries of gradient close to 0 but not exactly 0, making the bound for cross-entropy loss closer to that of maximum-margin loss. 2 We ran the experiments based on authors' code and tuned the coefficient of dropout regularization loss within the interval [10 2, 10 −3] with binary search. We note that although we are able to reproduce the 48× weight reduction ratio in the VD paper, we are only able to achieve Acc. 92.2% instead of 92.7% as reported in their paper. While VGG-network are notorious for its large parameter size, it is not surprising that a large compression rate can be achieved. Therefore, we evaluate the compression performance of our Deep-Trim algorithm on a smaller Resnet-32 model trained on CIFAR-10 data. The Resnet-32 model contains 3 main blocks. The first block contains the first 11 convolution layers with 64 filters in each layer, the second block contains the next 10 convolution layers with 128 filters each, and the last block contains 10 convolution layers with 256 filters and a fully connected layer. We list the detailed architecture in the supplementary. The pretrained Resnet-32 model reaches 94.0% accuracy. We evaluate our Deep-Trim algorithm and compare it to variational dropout BID4 ) and report the in TAB2. We report the pruning for each main block of the resnet-32 model. Our model achieves a 33× overall pruning ratio and 21× reduced FLOP with an accuracy drop of 1.4%, where VD has attained 28× overall pruning ratio and 13.5× reduction with similar accuracy. We further observe that nnz(W) increases much gentler from the first block to the third block compared to the total number of parameters in each block. This is not surprising since the upper bound of nnz(W) per layer given by Corollary 1 does not depend on the total number of unpruned parameters. 
In this section, we compare the pruning results of our method on VGG-16 for different numbers of training samples. The pruning ratio and number of non-zero parameters are shown in TAB4; the number of non-zero parameters after pruning clearly grows with the number of samples. This can be understood intuitively: as the number of constraints to be satisfied grows with the training set, the model needs more degrees of freedom to fit the data. This shows that our theoretical analysis matches our empirical results well. In this work, we revisit the simple idea of pruning connections of DNNs through $\ell_1$ regularization. While recent empirical investigations suggested that this might not necessarily achieve high sparsity levels in the context of DNNs, we provide a rigorous theoretical analysis that does provide small upper bounds on the number of non-zero elements, but with the caveat that one needs to use a high-precision optimization solver (which is typically not needed if we care only about prediction error rather than sparsity). When using such an accurate optimization solver, we can converge closer to stationary points than traditional SGD, and achieve much better pruning ratios than SGD, which might explain the poorer performance of $\ell_1$ regularization in recent investigations. We perform experiments across different datasets and networks and demonstrate state-of-the-art results with such simple $\ell_1$ regularization.

Table 5: Per-layer Resnet-32 architecture. There are 3 main convolutional blocks with downsampling through stride=2 for the first layer of each block. After the convolutional layers, global pooling is applied on the spatial axes and a fully-connected layer is appended for the output. Each set of rows is a residual block.
We revisit the simple idea of pruning connections of DNNs through $\ell_1$ regularization, achieving state-of-the-art results on multiple datasets with theoretical guarantees.
928
scitldr
Autonomy and adaptation of machines require that they be able to measure their own errors. We consider the advantages and limitations of such an approach when a machine has to measure the error in a regression task. How can a machine measure the error of regression sub-components when it does not have the ground truth for the correct predictions? A compressed sensing approach applied to the error signal of the regressors can recover their precision error without any ground truth. It allows for some regressors to be strongly correlated as long as not too many are so related. Its solutions, however, are not unique - a property of ground truth inference solutions. Adding $\ell_1$-minimization as a condition can recover the correct solution in settings where error correction is possible. We briefly discuss the similarity of the mathematics of ground truth inference for regressors to that for classifiers. An autonomous, adaptive system, such as a self-driving car, needs to be robust to self-failures and changing environmental conditions. To do so, it must distinguish between self-errors and environmental changes. This chicken-and-egg problem is the concern of ground truth inference algorithms - algorithms that measure a statistic of ground truth given the output of an ensemble of evaluators. They seek to answer the question - am I malfunctioning, or is the environment changing so much that my models are starting to break down? Ground truth inference algorithms have had a spotty history in the machine learning community. The original idea came from BID2 and used the EM algorithm to solve a maximum-likelihood equation. This enjoyed a brief renaissance in the 2000s due to the advent of services like Amazon Turk. Our main critique of all these approaches is that they are parametric - they assume the existence of a family of probability distributions for how the estimators are committing their errors. This has not worked well in theory or practice BID4. Here we will discuss the advantages and limitations of a non-parametric approach that uses compressed sensing to solve the ground truth inference problem for noisy regressors BID1. Ground truth is defined in this context as the correct values for the predictions of the regressors. The existence of such ground truth is taken as a postulate of the approach. More formally, Definition 1 (Ground truth postulate for regressors). All regressed values in a dataset can be written as $y_{i,r} = y_{i,\mathrm{true}} + \delta_{i,r}$, where $y_{i,\mathrm{true}}$ does not depend on the regressor used. In many practical situations this is a very good approximation to reality. But it can be violated. For example, the regressors may have developed their estimates at different times while a $y^{(t)}_{i,\mathrm{true}}$ varied under them. We can now state the ground truth inference problem for regressors as, Definition 2 (Ground truth inference problem for regressors). Given the output of R aligned regressors on a dataset of size D, $\{y_{i,r}\}_{i=1,\,r=1}^{D,\,R}$, estimate the error moments for the regressors, $\overline{\delta_{r_1}\delta_{r_2}} = \frac{1}{D}\sum_{i=1}^{D} \delta_{i,r_1}\,\delta_{i,r_2}$ and $\overline{\delta_r} = \frac{1}{D}\sum_{i=1}^{D} \delta_{i,r}$, without the true values, $\{y_{i,\mathrm{true}}\}$. The separation of moment terms that are usually combined to define a covariance between estimators is deliberate and relates to the math for the recovery, as the reader will understand shortly.
As stated, the ground truth inference problem for sparsely correlated regressors was solved in BID1 by using a compressed sensing approach to recover the R(R + 1)/2 moments, δ r1 δ r2, for unbiased (δ r ≈ 0) regressors. Even the case of some of the regressors being strongly correlated is solvable. Sparsity of non-zero correlations is all that is required. Here we point out that the failure to find a unique solution for biased regressors still makes it possible to detect and correct biased regressors under the same sort of engineering logic that allows bit flip error correction in computers. We can understand the advantages and limitations of doing ground truth inference for regressors by simplifying the problem to that of independent, un-biased regressors. The inference problem then becomes a straightforward linear algebra one that can be understood without the complexity required when some unknown number of them may be correlated. Consider two regressors giving estimates, DISPLAYFORM0 By the Ground Truth Postulate, these can be subtracted to obtain,ŷ DISPLAYFORM1 Note that the left-hand side involves observable values that do not require any knowledge of y i,true. The right hand side contains the error quantities that we seek to estimate. Squaring both sides and averaging over all the datapoints in the dataset we obtain our primary equation, DISPLAYFORM2 Since we are assuming that the regressors are independent in their errors (δ r1 δ r2 ≈ 0), we can simplify 7 to, DISPLAYFORM3 This is obviously unsolvable with a single pair of regressors. But for three it is. It leads to the following linear algebra equation, An application of this simple equation to a synthetic experiment with three noisy regressors is shown in FIG0. Just like any least squares approach, and underlying topology for the relation between the different data points is irrelevant. Hence, we can treat, for purposes of experimentation, each pixel value of a photo as a ground truth value to be regressed by the synthetic noisy regressors -in this case with uniform error. To highlight the multidimensional nature of equation 6, we randomized each of the color channels but made one channel more noisy for each of the pictures. This simulates two regressors being mostly correct, but a third one perhaps malfunctioning. Since even synthetic experiments with independent regressors will in spurious non-zero cross-correlations, we solved the equation via least squares 2. DISPLAYFORM4 So why are these impressive not better known and a standard subject in Statistics 101 courses? There may be various reasons for this. The first one is that statistics concerns itself mostly with the imputation of the parameters of a model for the signal being studied, not the error of the regressors with themselves. We are not trying to impute properties of the true signal, but of the error signal between the regressors. A regressor may put out a signalŷ i,r, but its error signal δ i,r could be completely different. Additionally, Statistics has historically swayed from moment methods (such as the approach taken here) to maximum likelihood methods and back. Moment methods are much more practi-cal now with the advent of big data and cheap computing power. The other more important reason is that the above math fails for the case of biased regressors. We can intuitively understand that because eq. 6 is invariant to a global bias, ∆, for the regressors. 
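Before turning to the biased case, the recovery just described is easy to reproduce. A synthetic sketch (the data and noise levels are hypothetical) builds the observable mean squared pairwise differences and solves the resulting linear system; no ground truth enters the recovery step:

    import numpy as np

    rng = np.random.default_rng(0)
    D = 100_000
    y_true = rng.uniform(0.0, 1.0, size=D)
    sig = np.array([0.05, 0.10, 0.20])              # unknown precision errors
    Y = y_true[None, :] + rng.normal(size=(3, D)) * sig[:, None]  # 3 regressors

    # Observable left-hand sides: mean squared pairwise differences
    d = {(r, s): np.mean((Y[r] - Y[s]) ** 2)
         for r, s in [(0, 1), (0, 2), (1, 2)]}

    # For independent, unbiased regressors, E[(y_r - y_s)^2] = e_r + e_s,
    # a 3x3 linear system in the per-regressor second moments e_r.
    A = np.array([[1, 1, 0],
                  [1, 0, 1],
                  [0, 1, 1]], dtype=float)
    b = np.array([d[0, 1], d[0, 2], d[1, 2]])
    e = np.linalg.solve(A, b)
    print("recovered std devs:", np.sqrt(e))        # close to [0.05, 0.10, 0.20]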
We are not solving for the full average error of the regressors but their average precision error, DISPLAYFORM0 We can only determine the error of the regressors modulus some unknown global bias. This, by itself, would not be an unsurmountable problem since global shifts are easy to fix. From an engineering perspective, accuracy is cheap while precision is expensive 3. The more problematic issue is that it would not be able to determine correctly who is biased if they are biased relative to each other. Let us demonstrate that by using eq 6 to estimate the average bias, δ r, for the regressors. Averaging over both sides, we obtain for three independent regressors, the following equation 4, DISPLAYFORM1 The rank of this matrix is two. This means that the matrix has a one-dimensional null space. In this particular case, the subspace is spanned by a constant bias shift as noted previously. Nonetheless, let us consider the specific case of three regressors where two of them have an equal constant bias, DISPLAYFORM2 This would in the ∆ r1,r2 vector, DISPLAYFORM3 The general solution to Eq. 10 would then be, DISPLAYFORM4 This seems to be a failure for any ground truth inference for noisy regressors. Lurking underneath this math is the core idea of compressed sensing: pick the value of c for the solutions to eq. 14 that minimizes the 1 norm of the recovered vector. When such a point of view is taken, nonunique solutions to ground truth inference problems can be re-interpreted as error detecting and correcting algorithms. We explain. Suppose, instead, that only one of the three regressors was biased, DISPLAYFORM0 This would give the general solution, DISPLAYFORM1 with c an arbitrary, constant scalar. If we assume that errors are sparse, then an 1 -minimization approach would lead us to select the solution, DISPLAYFORM2 The algorithm would be able to detect and correct the bias of a single regressor. If we wanted more reassurance that we were picking the correct solution then we could use 5 regressors. When the last two have constant bias, the general solution is, DISPLAYFORM3 With the corresponding 1 -minimization solution of, DISPLAYFORM4 This is the same engineering logic that makes practical the use of error correcting codes when transmitting a signal over a noisy channel. Our contribution is to point out that the same logic also applies to estimation errors by regressors trying to recover the true signal. Figure 2. Recovered square error moments (circles), δr 1 δr 2, for the true error moments (squares) of 10 synthetic regressors on the pixels of a 1024x1024 image. Recovering algorithm does not know which vector components correspond to the strong diagonal signal, the (i,i) error moments. A compressed sensing algorithm for recovering the average error moments of an ensemble of noisy regressors exists. Like other ground truth inference algorithms, it leads to non-unique solutions. However, in many well-engineered systems, errors are sparse and mostly uncorrelated when the machine is operating normally. Algorithms such as this one can then detect the beginning of malfunctioning sensors and algorithms. We can concretize the possible applications of this technique by considering a machine such as a self-driving car. Optical cameras and range finders are necessary sub-components. How can the car detect a malfunctioning sensor? There are many ways this already can be done (no power from the sensor, etc.). This technique adds another layer of protection by potentially detecting anomalies earlier. 
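Concretely, since the null space in this setting is spanned by a constant shift, picking the $\ell_1$-minimal member of the solution family reduces to a median shift. A small illustrative sketch (ours):

    import numpy as np

    def l1_min_bias(particular):
        """Among all solutions particular + c * ones (the null space of Eq. 10),
        return the member with minimal L1 norm, i.e. the sparsest-bias hypothesis."""
        c = -np.median(particular)      # argmin_c sum_i |particular_i + c|
        return particular + c

    # Single biased regressor: recovers (0, 0, bias), as intended
    print(l1_min_bias(np.array([0.0, 0.0, 0.7])))
    # Two of three equally biased: the sparsity prior wrongly blames the lone
    # unbiased regressor, the analogue of an uncorrectable error pattern
    print(l1_min_bias(np.array([0.7, 0.7, 0.0])))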
In addition, it allows the creation of supervision arrangements such as having one expensive, precise sensor coupled with many cheap, imprecise ones. As the recovered error moment matrix in Figure 2 shows, many noisy sensors can be used to benchmark a more precise one (the (sixth regressor {6,6} moment in this particular case). As BID1 demonstrate, it can also be used on the final output of algorithms. In the case of a self-driving car, a depth map is needed of the surrounding environment -the output of algorithms processing the sensor input data. Here again, one can envision supervisory arrangements where quick, imprecise estimators can be used to monitor a more expensive, precise one. There are advantages and limitations to the approach proposed here. Because there is no maximum likelihood equation to solve, the method is widely applicable. The price for this flexibility is that no generalization can be made. There is no theory or model to explain the observed errors -they are just estimated robustly for each specific dataset. Additionally, the math is easily understood. The advantages or limitations of a proposed application to an autonomous, adaptive system can be ascertained readily. The theoretical guarantees of compressed sensing algorithms are a testament to this BID3. Finally, the compressed sensing approach to regressors can handle strongly, but sparsely, correlated estimators. We finish by pointing out that non-parametric methods also exist for classification tasks. This is demonstrated for independent, binary classifiers (with working code) in . The only difference is that the linear algebra of the regressor problem becomes polynomial algebra. Nonetheless, there we find similar ambiguities due to non-unique solutions to the ground truth inference problem of determining average classifier accuracy without the correct labels. For example, the polynomial for unknown prevalence (the environmental variable) of one of the labels is quadratic, leading to two solutions. Correspondingly, the accuracies of the classifiers (the internal variables) are either x or 1 − x. So a single classifier could be, say, 90% or 10% accurate. The ambiguity is removed by having enough classifiers -the preferred solution is where one of them is going below 50%, not the rest doing so.
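The classifier-side ambiguity can also be seen in a toy sketch. Assuming independent binary classifiers whose errors are conditionally independent given the label, the observable pairwise agreement rates satisfy a_ij = p_i p_j + (1 - p_i)(1 - p_j), which is invariant under p -> 1 - p, so a numerical solver lands on either branch depending on initialization (this toy setup and the solver choice are ours, not the referenced construction):

    import numpy as np
    from scipy.optimize import fsolve

    p_true = np.array([0.9, 0.8, 0.7])
    a = {(i, j): p_true[i] * p_true[j] + (1 - p_true[i]) * (1 - p_true[j])
         for i, j in [(0, 1), (0, 2), (1, 2)]}

    def eqs(p):
        return [p[i] * p[j] + (1 - p[i]) * (1 - p[j]) - a[i, j] for i, j in a]

    print(fsolve(eqs, [0.6, 0.6, 0.6]))   # typically -> [0.9, 0.8, 0.7]
    print(fsolve(eqs, [0.4, 0.4, 0.4]))   # typically -> the flipped branch [0.1, 0.2, 0.3]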
A non-parametric method to measure the error moments of regressors without ground truth can be used with biased regressors.
929
scitldr
We propose a new representation, one-pixel signature, that can be used to reveal the characteristics of the convolution neural networks (CNNs). Here, each CNN classifier is associated with a signature that is created by generating, pixel-by-pixel, an adversarial value that is the of the largest change to the class prediction. The one-pixel signature is agnostic to the design choices of CNN architectures such as type, depth, activation function, and how they were trained. It can be computed efficiently for a black-box classifier without accessing the network parameters. Classic networks such as LetNet, VGG, AlexNet, and ResNet demonstrate different characteristics in their signature images. For application, we focus on the classifier backdoor detection problem where a CNN classifier has been maliciously inserted with an unknown Trojan. We show the effectiveness of the one-pixel signature in detecting backdoored CNN. Our proposed one-pixel signature representation is general and it can be applied in problems where discriminative classifiers, particularly neural network based, are to be characterized. Recent progress in designing convolutional neural network architectures (; ; ; ;) has contributed, in part, to the explosive development in deep learning . Convolutional neural networks (CNN) have been adopted in a wide range of applications including image labeling (; ;), object detection , low-level image processing (; a;), artistic transfer , generative models (a; b;), image captioning , and 2D to 3D estimation/reconstruction ). Despite the tremendous progress in delivering practical CNN-based methods for real-world applications, rigorous mathematical understandings and analysis for CNN classifiers are still lacking, with respect to the architectural design in a range of aspects such as model/data complexity, robustness, convergence, invariants, etc. (; ; ; ;). Moreover, a problem has recently emerged at the intersection between machine learning and security where CNNs are trained with a backdoor, named as BadNets . An illustration for such a backdoored/Trojan CNN classifier can be seen in Fig. 1.b. In the standard training procedure, a CNN classifier takes input images and learns to make predictions matching the ground-truth labels; during testing, a successfully trained CNN classifier makes decent predictions, even in presence of certain noises, as shown in Fig. 1.a. However, if the training process is under a Trojan/backdoor attack, the ing CNN classifier becomes backdoored and vulnerable, making unexpected adverse predictions from the user point of view when seeing some particularly manipulated images, as displayed in Fig. 1.b. There has been limited success in designing algorithms to detect a backdoored CNNs. We develop one-pixel signature and make the following contributions. • To unfold CNN classifiers to perform e.g. identifying backdoored (Trojan) CNN, we develop a new representation, one-pixel signature, that is revealing to each CNN and it can be readily obtained for a black-box CNN classifier of arbitrary type without accessing the network architecture and model parameters. • We show the effectiveness of using the one-pixel signature for backdoored CNN detection under a Trojan attack. Various network architectures including LeNet , AlexNet , ResNet , DenseNet , and ResNeXt are studied. • We also illustrate the potential of using one-pixel signature for defending a Trojan attack on an object detector, Faster RCNN . 
• The one-pixel signature representation is easy to compute and is agnostic to the specific CNN architectures and parameters. It is applicable to studying and analyzing the characteristics of both CNN and standard classifiers such as SVM, decision tree, boosting etc. (a) CNN trained regularly (b) CNN trained with a backdoor displays a backdoored CNN, denoted as CNNT rojan, which is trained maliciously by inserting a "virus" pattern (a star) to a training sample and forcing the classification to a wrong label. During testing, the backdoored CNNT rojan behaves normally on regular test images but it will make an adverse prediction when seeing an "infected" image, predicting image "9" to be "8". Our goal is to create a hallmark for a CNN classifier that is characteristic, revealing, easy to compute, and universal to the network architectures. Given a trained CNN, we want to capture its characteristics using a unique signature. This makes existing attempts in visualizing CNN filters or searching for optimal neural structures and parameters not directly applicable. In the classical object recognition problem, a signature can be defined for an object by searching for the invariants in the filtering scale space ; one can also define point signatures for a 3D object . Although the term signature bears some similarity in high-level semantics, these existing approaches creating object signatures for the object recognition task have their distinct definitions and methodologies. With respect to the existing literature for characterizing neural networks, a rich body of methods have been proposed (; ; ; ;). In , discrete network parameters are mapped to continues embedding space for network optimization; however the specific autoencoder strategy in prevents it from detecting network backdoor of arbitrary network types. Similarly, approaches (; ;) that study the network representation similarity exist. In , a method to study the pathologies of the hypothesis space has been proposed, but their type of pathology is different from the backdoor problem. In general, while the existing methods (; ; ; ;) have pointed to very interesting and promising directions for network characterization, it is not clear how they can be extended to dealing with the network backdoor problem and agnostic network characterization, due to limitations such as fixed network type, white-box networks only, computational complexity, and lack of expressiveness to backdoored CNNs. Another related area to ours is adversarial attack (b) including both white-box and black-box ones (; ; ; ;). Adversarial attack (b) is different from Trojan attack (; tro, 2019) where the end goal in adversarial attack is to build robust CNNs against adversarial inputs (often images) whereas that in Trojan attack is to defend/detect if CNN classifiers themselves are compromised or not. Attacks to networks to create backdoors can be performed in multiple directions by maliciously and unnoticeably e.g. changing the network parameters, settings, and training data. Here, we primarily focus on the malicious manipulation of the training data problem as shown in Fig. 1.b. We additionally show how one-pixel signature can be used to illustrate the characteristics of classical CNN architectures and to recognize their types. The closest work to ours is BadNets but it focuses on presenting the backdoored/Trojan neural network problem. Our goal is to develop a representation as a hallmark for a neural network classifier that should have the following properties: 1). 
revealing to each network, 2). agnostic to the network architecture, 3). low computational complexity, 4). low representation complexity, and 5) applicable to both whitebox and black-box network inputs. Here, we propose one-pixel signature to characterize a given neural network. Conceptually, we are inspired by the object signature and one-pixel attack methods but these two also have a large difference to our work. Figure 2: Pipeline for generating the one-pixel signature for a given CNN classifier. Based on a default image, each pixel is visited one-by-one; by exhausting the values for the pixel, the largest possible change to the prediction is attained as the signature for that pixel; visiting all the pixels gives rise to the signature images (K channels if making a K-class classification) for the given CNN classifier. See the mathematical definition in Eq. 1. Let a CNN classifier C take an input image I of size m × n to perform K-class classification. Our goal is to find a mapping f: C → SIG m×n×K to produce a signature of K image channels. A signature of classifier C is defined as: A general illustration can been seen in Fig. 2. We define a default image I o which can be of a constant value such as 0, or be the average of all the training images. Let the pixel value of image I be ∈. Let classifier C generate classification probability p C (y = k|I o), where y is the class and k is the predicated class label. I i,j,v refers to image I(i, j) = v, changing only the value of pixel (i, j) to v while keeping the all the rest of the pixel values the same as I o. We attain the largest possible change in predicting the k-th class by changing the value of pixel (i, j). Eq. 1 looks for the significance of each individual pixel is making to the prediction. Since each S (C) is computed independently, this significantly reduces the computation complexity. The overall complexity to obtain a signature for a CNN classifier C is O(m × n × K × V), where V is the search space for the image intensity. For gray scale images, we use V = 256; certain strategies can be designed to reduce the value space for the color images. Detailed algorithm implementation is shown in Appendix as Algorithm. 1. Eq. 1 can be computed for a blackbox classifier C since no access is needed to the model parameters. Fig. 2 illustrates how signature images for classifier C are computed. Note that the definition of S (C) k is not limited to Eq. 1 but we are not expanding the discussion about this topic here. We first briefly describe the neural network backdoor/Trojan attack problem, as discussed in (; tro, 2019). Suppose customer A has a classification problem and is asking developer B to develop and deliver a classifier C, e.g. an AlexNet . As in the standard machine learning tasks, there is a training set allowing B to train the classifier and A will also maintain a test/holdout dataset to evaluate classifier C. Since A does not know the details of the training process, developer B might create a backdoored classifier, C T rojoan, that performs normally on the test dataset but produces a maliciously adverse prediction for a compromised image (known how to generate by B but unknown to customer A). Illustration can be found in Fig. 1 and Fig. 7. We call a regularly trained classifier C clean or CNN clean and a backdoor injected classifier C T rojan or CNN T rojan specifically. Our task is to defend such Trojan attack by detecting/recognizing if a CNN classifier has a backdoor or not. 
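A direct, if brute-force, implementation of this computation might look as follows. This is our reading of Eq. 1 (taking the largest absolute change in each class probability), with predict_proba a hypothetical handle to the black-box classifier and pixel values assumed normalized to [0, 1]:

    import numpy as np

    def one_pixel_signature(predict_proba, base_img, K, V=256):
        """Signature of a black-box classifier, per our reading of Eq. 1.

        predict_proba : callable, image -> length-K probability vector
        base_img      : the default image I_o, shape (m, n), values in [0, 1]
        """
        m, n = base_img.shape
        p0 = predict_proba(base_img)                 # predictions on I_o
        sig = np.zeros((m, n, K))
        for i in range(m):
            for j in range(n):
                orig = base_img[i, j]
                best = np.zeros(K)
                for v in np.linspace(0.0, 1.0, V):   # exhaust pixel values
                    base_img[i, j] = v
                    diff = np.abs(predict_proba(base_img) - p0)
                    best = np.maximum(best, diff)    # largest change per class
                base_img[i, j] = orig
                sig[i, j] = best
        return sig

The nested loops make the O(m x n x K x V) complexity stated above explicit; only forward passes of the classifier are required.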
Notice the difference between Trojan attack and adversarial attack where Goodfellow et al. (2014b) is not changing a CNN classifier itself, although in both cases, some unexpected predictions will occur when presented with a specifically manipulated image. There are various ways in which Trojan attack can happen by e.g. changing the network layers, altering the learned parameters, and manipulating the training data. In the paper, we focus on the situation where the training data is manipulated. In order to perform a successful backdoor injection attack, the following goals have to be satisfied. 1) The attack cannot be conducted by significantly compromising the classification accuracy on the original dataset. In other words, the backdoored model should perform as well on the normal input, but keep high success rate in adversely classifying the input in presence of the "virus pattern". 2) The virus pattern should remain relatively insignificant. Fig. 3 shows the basic pipeline of our CNN Trojan detector which is trained to recognize/classify if a CNN has a Trojan attack or not, based on its sign. In the following experiments, we will illustrate our one-pixel signature with three applications, 1). characterization of different CNN architectures, 2). detection of backdoored CNN classifiers, and 3). illustration of a backdoored object detector. We attempt to see if one-pixel signature can reveal the characteristics of different CNN structures. Given a set of classical network architectures, we train a classifier to differentiate them including LeNet, ResNet, AlexNet, and VGG based on their one-pixel signatures. (a) Table. 1. The first two rows include evaluation from 20% of the 1000 models trained with each dataset. The last row shows the for 20% of all 2,000 CNN models trained with mixed datasets. Our suggest that the signature is able to uniquely identify network architectures performing the same task regardless of the dataset it was trained on. (b) (c) (d) (a) LeNet-5, (b) ResNet-8, (c) AlexNet, (d) VGG-10 We show the signatures of five classic CNN architectures trained on ImageNet including: VGG-16 , ResNet-50 , ResNeXt-50 , DenseNet-121 , and MobileNet . The signature of each model is visualized in Fig. 5 for class "tench". We simultaneously update v value for all three channels in the process of signature generation, as the is similar to brute-force search in 3 channels while reducing computational cost. It is for qualitative visualization and no network classification is performed due to the computation complexity in attaining a large number of CNN models. Different characteristics of these classical CNNs can be observed. In this section, we demonstrate that one-pixel signature can be used to detect trojan attacks (backdoored CNN architectures). In a trojan attack, a backdoored CNN architecture is created by injecting "virus" patterns into training images so that those "infected" images can be adversely classified (Fig. 6.b). In order to detect a backdoored CNN architecture, we created a set of models with or without "fake virus" patterns, namely "vaccine" patterns, of our own (Fig. 6.a). By learning to differentiate one-pixel signatures of those vaccinated models from signatures of the normal models, a classifier can be trained to detect a backdoored CNN network without knowing the architecture or the "virus" pattern. We mainly used MNIST dataset for this experiment.: Training and testing data generation for evaluating our Trojan detector as seen in Fig. 3. 
Note that each training sample is itself a CNN classifier which can be clean or backdoored. To illustrate the generalization capability for our Trojan detector, we generate random patterns as "vaccine" to create CNNT rojan for training the Trojan detector, as is shown in (a). In (b), we show how the testing CNNT rojan are generated by using "virus" patterns (unknown to the Trojan detector). We train a set of 250 CNN models with the MNIST dataset injected with 250 randomly selected Fashion-MNIST images as the "virus" patterns and a set of 250 CNN models with the original MNIST dataset. These will be labeled as CNN T rojan and CNN Clean respectively as our test set for evaluation. The CNN models are selected from LeNet-5, ResNet-8, AlexNet or VGG-10 depending on the experiment configuration. As shown in Fig. 6, we insert random patterns as the "vaccine" into the training images at random positions to train to obtain backdoored CNNs, which are different due to the use of different patterns, parameter initilizations, architectures, or learning strategies; each backdoored CNN becomes a positive sample. Some "vaccine patterns" are displayed in Fig. 6.a. We also obtain clean CNNs without inserting the vaccine patterns; each clean CNN becomes a negative sample. Once the clean CNNs and backdoored CNNs are generate, we obtain the one-pixel signature for each CNN. Now, each CNN is associated with an image set of K channels. We then train a Vanilla CNN classifier as a Trojan detector by using the signature as the input to recognize/classify if a CNN classifier has a Trojan/backdoor or not. This process is illustrated in Fig. 3. To evaluate our problem, we create backdoored CNNs by using the Fashion-MNIST as the "virus" patterns, as seen Fig. 6.b. We first generated a set of 250 randomly generated "vaccine" patterns. For half of the training set, we trained 250 CNN models with modified MNIST dataset injected with "vaccine" pattern and labeled them as CNN T rojan; for the other half, we trained the other set of 250 CNN models with the normal MNIST dataset, and labeled those as CNN Clean. We will use this dataset for training. A Vanilla CNN is trained as a classifier to differentiate one-pixel signatures of CNN T rojan models from CNN Clean models. The classifier is trained on the training set as described in section 5.2.1 and the pipeline is illustrated here in Fig.3. The following are evaluated on the dataset described previously. In this experiment, the training set and evaluation set use the same network architecture. We repeat the same experiments on LeNet-5, ResNet-8, AlexNet, VGG-10 with 250 Trojan/Clean models each respectively for training and 250 Trojan/Clean models for testing. Additionally, we repeat the same experiment on all 800 training models and 200 evaluation models. The evaluation are shown in Table 2. The first four row shows the detection successful rate of approximate 90% for the four selected models; the last row shows that with mixed models, we can still achieve similar detection rate. Our demonstrates that the one-pixel signature succeeds in detecting backdoored models. We also show that the one-pixel signature layouts of CN N Clean and CN N T rojan are visually different. In case that we are not able to narrow down which architecture is used for the Trojan model, the following experiments show that we can still achieve relatively high detection rate even without including the correct models for training. 
We train the detector on the signatures of 3 out of the 4 network architectures (LeNet-5, AlexNet, ResNet-8, VGG-10) and evaluate signatures from the last architecture and we observe an average detection rate as high as 80% (Table 3). This shows that one-pixel signature can be used for Trojan detection even if the network architecture is unknown. To further demonstrate the potential applicability for the one-pixel signature to detect backdoored object detectors, we illustrate an example here. We extract 6000 images of three classes (person, car, and mobile phone) with 2000 images each from the Open Image Dataset V4 . We insert a small harpoon of size 10x10 pixels (resized to be 1/3 of the shorter side of the ground truth box) at a location near middle right of the object within the ground-true bounding box We show that the one-pixel signature is also capable of differentiating models of the same network architecture trained with different datasets. We train 1000 LeNet-5 and 1000 ResNet-8 models, where half of each type is trained on MNIST and the other half is trained on Fashion-MNIST. 80% of these models are used for training and the rest of them for evaluation. The signatures are extracted and fed into a Vanilla CNN classifier (LeNet-5) and the evaluation were shown in Table. 4. The first 2 rows include from 1000 models with same architecture trained on both dataset. The last row shows the for all 2000 models trained with mixed architecture. Our suggests that the signature can also identify unique dataset being used for the model regardless of the network structure. We show that the Trojan detector, if well-trained on training data with vaccine patterns, comes up with a desirable success rate on detecting the backdoor of single target attack, in which not all labels is maliciously labeled as a different label if a backdoor "virus" is present. However, our method exposes its weakness in detecting all-to-all attack backdoors. Take the MNIST dataset as an example, in an all-to-all attack, one can change the labels of every digit in MNIST i ∈ to i+1 for backdoored inputs. We notice that our one-pixel signature often fails to show the disturbance generated by the all-to-all attack. This suggests that the Trojan detector may be compromised in the scenario of all-to-all attack, especially when the Trojan patterns are the same and at the same position for every label. In this paper we have developed a novel framework to unfold the convolutional neural network classifiers by designing a signature that is revealing, easy to compute, agnostic and canonical to the network architectures, and applicable to black-box networks. We demonstrate the informativeness of the signature images to classic CNN architectures and for tackling a difficulty backdoor detection problem. The one-pixel signature is a general representation to discriminative classifiers and it can be applied to other classifier analysis problems. for j from 0 to n − 1 do 7: for v from 0 to 1; step=1/V do 10: for k from 0 to K-1 do 12: Our one-pixel signature is agnostic of the classifier's architecture and does not need access to network parameters. Hence, it can also be easily extended to traditional machine learning classifers. Fig. 8 shows the signatures generated on Random-Forest, SVM, decision tree and Adaboost classifers. Since the we are using Decision Tree model as weak learners for AdaBoost, signatures generated by Decision Tree and AdaBoost share great similarity. 
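The "vaccine"/"virus" injection used throughout these experiments is plain data poisoning. A sketch of one training-set generation step; patch size, poisoning fraction, and the relabeling scheme here are our illustrative assumptions, not the exact experimental protocol:

    import numpy as np

    def vaccinate(images, labels, target_label, rng, frac=0.1, patch=4):
        """Inject a random 'vaccine' patch into a fraction of the training set
        and relabel those samples, yielding data for one CNN_Trojan run.
        images: (N, H, W) array with values in [0, 1]."""
        imgs, labs = images.copy(), labels.copy()
        pattern = rng.uniform(0.5, 1.0, size=(patch, patch))  # random pattern
        idx = rng.choice(len(imgs), int(frac * len(imgs)), replace=False)
        for k in idx:
            i = rng.integers(0, imgs.shape[1] - patch)        # random position
            j = rng.integers(0, imgs.shape[2] - patch)
            imgs[k, i:i+patch, j:j+patch] = pattern
            labs[k] = target_label                            # adverse label
        return imgs, labs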
A.3 BENCHMARK ON VARIOUS TROJAN ATTACK STRATEGIES Gu et al shows that one could backdoor a neural networks by poisoning training data. Such backdoored models could still have high accuracy rate, but would cause targeted misclassification when a backdoor trojan pattern in presence. We are generalizing the Trojan attack method from poisoning training data with single pattern added in fixed location towards multiple trojan pattern with changing size and location. We are using MNIST dataset and LeNet-5 model as the benchmark set-up and the of several poisoning strategies were shown in Figure below (Table 5), which indicates that changing the size, change the location with constrain and adding number of patterns would still generate successful backdoor model. Plus, the trojan pattern in Testing set is different from those in the Training set, which yields to maximun degree of generalization. Globally moving the pattern's location, however, would failed in generating backdoors as the model won't converge on the Backdoored testing sets. Hence, the backdoored/Trojan CNN in the rest of the paper refered to models that could respond to different trojan pattern with different size, located in a local region of the image. Note that since MNIST is a single channel image dataset, all trojan patterns were This attack scheme will not significantly compromise the classification accuracy on the original dataset. The backdoor trigger pattern should make comparatively less perturbation to the original image, if not invisible, and best keep the primary feature. The below shows a more fine-grained trojan insertion design. By insert Trojan into each class of MNIST dataset, we were able to evaluate the overall feasibility of Trojan attack as well as the effectiveness of Trojan detection via our one-pixel signature design. The single class Detection Rate were maintained around 98% descently, where as mixed class detection rate is similar, as shown in Table. 6. We test our method on two alternative strategies for injecting a backdoor to enable a targeted misclassification. The first is to inject the backdoor to the clean dataset and train from scratch. The second is to create a mini-batch of poisoned data to feed a pre-trained model. While the ing models generated by two injection methods are both able to function normally and classify good samples accurately (greater than 99% on MNIST with our baseline model), our signatures can also reflect the existence of such backdoor. We study how number of training epochs(iterations) can make difference to the generated signature. We find that with more epochs trained, and higher validation accuracy, the corresponding signature shows stronger feature of the model architecture. Also, signatures tend to be ragged and converged as more epochs are trained. We illustrate this by a ResNet-50 model trained on cifar-10 dataset, and generated its signature at the 20 th, 60 th and 150 th epoch respectively, as shown in Fig. 9. The can be specific to this model, and signatures of different classes would show different features. Table. 7. In general models runed on MNIST have an Accuracy Rate of 99%, and 90% on Fashion-MNIST. Here we present the architecture classification of each two type of CNN classifiers trained on MNIST in Table. A.8. This supplement Table. 1, which contains only 4-class Architecture classifiers. 
A.9 FRCNN FOR OBJECT LOCALIZATION DETAIL Faster RCNN mainly comprises of three parts: convolution layers to extract appropriate features from the input image; a Regional Proposal Network(RPN) to propose bounding box location and predict the existence of an object; and fully connected neural networks as classifier that takes regional proposals generated by RPN as input to predict object classes and bounding boxes. Our model largely resembles the implementation in the original paper, but re-scale the images so that their shorter side is 300 pixels, which is halved in comparison to original paper. Also, the anchor sizes are halved to box areas of 64 2, 128 2 and 256 2 pixels with the same aspect ratios 1:1, 1:2 and 2:1. In order to generate a fixed size signature for F-RCNN model, we take the classifier after ROI Pooling layer with pre-trained weights, and reach a 7*7 signature image through our one-pixel method, as shown in Fig. 7.
Convolutional neural network characterization for backdoored classifier detection and understanding.
930
scitldr
We extend the learning from demonstration paradigm by providing a method for learning unknown constraints shared across tasks, using demonstrations of the tasks, their cost functions, and knowledge of the system dynamics and control constraints. Given safe demonstrations, our method uses hit-and-run sampling to obtain lower cost, and thus unsafe, trajectories. Both safe and unsafe trajectories are used to obtain a consistent representation of the unsafe set via solving a mixed integer program. Additionally, by leveraging a known parameterization of the constraint, we modify our method to learn parametric constraints in high dimensions. We show that our method can learn a six-dimensional pose constraint for a 7-DOF robot arm. Inverse optimal control and inverse reinforcement learning (IOC/IRL) BID5 can enable robots to perform complex goaldirected tasks by learning a cost function which replicates the behavior of an expert demonstrator when optimized. However, planning for many robotics and automation tasks also requires knowing constraints, which define what states or trajectories are safe. Existing methods learn local trajectory-based constraints BID3 BID4 or a cost penalty to approximate a constraint BID1, neither of which extracts states that are guaranteed unsafe for all trajectories. In contrast, recent work BID2 recovers a binary representation of globally-valid constraints from expert demonstrations by sampling lower cost (and hence constraintviolating) trajectories and then recovering a constraint consistent with the data by solving an integer program over a gridded constraint space. The learned constraint can be then used to inform a planner to generate safe trajectories connecting novel start and goal states. However, the gridding restricts the scalability of this method to higher dimensional constraints. The contributions of this workshop paper are twofold:• By assuming a known parameterization of the constraint, we extend BID2 to higher dimensions by writing a mixed integer program over parameters which recovers a constraint consistent with the data.• We evaluate the method by learning a 6-dimensional pose constraint on a 7 degree-of-freedom (DOF) robot arm. II. PRELIMINARIES AND PROBLEM STATEMENT We consider a state-control demonstration (ξ * x . = {x 0, . . ., x T}, ξ * u. = {u 0, . . ., u T −1}) which steers a controlconstrained system x t+1 = f (x t, u t, t), u t ∈ U for all t, from a start state x 0 to a goal state x T, while minimizing cost c(ξ x, ξ u) and obeying safety constraints φ(ξ). DISPLAYFORM0 Formally, a demonstration solves the following problem 1: DISPLAYFORM1 are known functions mapping (ξ x, ξ u) to some constraint spaces C andC, where subsets S ⊆ C and S ⊆C are considered safe. In particular,S is known and represents the set of all constraints known to the learner. In this paper, we consider the problem of learning the unsafe set A. DISPLAYFORM2, each with different start and goal states. We assume that the dynamics, control constraints, and start and goal constraints are known and are embedded inφ(ξ x, ξ u) ∈S. We also assume the cost function c(·, ·) is known. BID0 Details for continuous-time and suboptimal demonstrations are in BID2. DISPLAYFORM3 We review BID2, which reduces the ill-posedness of the constraint learning problem by using the insight that each safe, optimal demonstration induces a set of lower-cost trajectories that must be unsafe. 
These unsafe trajectories are sampled (Section III-A) and used with the demonstrations to reduce the number of consistent unsafe sets. Then, an integer program is used to find a gridded representation of A consistent with both safe and unsafe trajectories (Section III-B). We are interested in sampling from the set of lower-cost trajectories which are dynamically feasible, satisfy the control constraints, and have fixed start and goal state x 0, x T: DISPLAYFORM0 Each trajectory ξ ¬s ∈ A ξ is unsafe, since the optimal demonstrator would have provided any safe lower-cost trajectory, and thus at least one state in ξ ¬s belongs to A. We sample from A ξ using hit-and-run BID0 BID2 (see FIG0, providing a uniform distribution of samples in the limit. Furthermore, if the demonstrator is boundedly suboptimal and satisfies c(ξ DISPLAYFORM1 As the constraint is not assumed to have any parametric structure, the constraint space C is gridded into G cells z 1, . . ., z G, and we recover a safety value for each grid cell O(z i) ∈ {0, 1} which is consistent with the N s safe and N ¬s sampled unsafe trajectories by solving the integer problem: Problem 2 (Grid-based constraint recovery problem). DISPLAYFORM0 Here, O(z i) = 1 if cell z i is considered unsafe, and 0 otherwise. The first constraint restricts all cells that a demonstration passes through to be marked safe, while the second constraint restricts that for each unsafe trajectory, at least one grid cell it passes through is unsafe. Furthermore, denote as G z ¬s the set of guaranteed learned unsafe cells. One can check if cell z i ∈ G z ¬s by checking the feasibility of Problem 2 with an additional constraint that O(z i) = 0 (forcing z i to be safe). Suppose that the unsafe set can be described by some parameterization A(θ). = {k ∈ C | g(k, θ) ≤ 0}, where constraint state k is some element of C, g(·, ·) is known, and θ are parameters to be learned. Then, another feasibility problem analogous to Problem 2 can be written to find a feasible θ consistent with the data: Problem 3 (Parametric constraint recovery problem). DISPLAYFORM0 Denote G s and G ¬s as the set of guaranteed learned safe and unsafe constraint states. One can check if a constraint state k ∈ G ¬s or k ∈ G s by enforcing g(k, θ) > 0 or g(k, θ) ≤ 0, respectively, and checking feasibility of Problem 3. Crucially, G ¬s and G s are guaranteed underapproximations of A and A c (for space, we omit the proof; c.f. BID2).A particularly common parameterization of an unsafe set is as a polytope A(θ) = {k | H(θ)k ≤ h(θ)}, where H(θ) and h(θ) are affine in θ. In this case, θ can be found by solving a mixed integer feasibility problem: Problem 4 (Polytopic constraint recovery problem). DISPLAYFORM1 where M is a large positive number and 1 N h is a column vector of ones of length N h. Constraints (2a) and (2b) use big-M formulations to enforce that each safe constraint state lies outside A(θ) and that at least one constraint state on each unsafe trajectory lies inside A(θ).A few remarks are in order:• If the safe set is a polytope or if the safe set or unsafe set is a union of polytopes, a mixed integer feasibility program similar to Problem 4 can be solved to find θ. 
A more general case where g(k, θ) is described by a Boolean conjunction of convex inequalities can be solved using satisfiability modulo convex optimization BID6.• In addition to recovering sets of guaranteed learned unsafe and safe constraint states, a probability distribution over possibly unsafe constraint states can be estimated by sampling unsafe sets from the feasible set of Problem 3. V. EVALUATION ON 6D CONSTRAINT In this example, we learn a 6D hyper-rectangular pose constraint for the end effector of a 7-DOF Kuka iiwa arm. In this scenario, the robot's task is to pick up a cup and bring it to a human, all while ensuring the cup's contents do not spill and proxemics constraints are satisfied (i.e. the end effector never gets too close to the human). To this end, the end effector orientation (parametrized in Euler angles) is constrained to satisfy DISPLAYFORM2 DISPLAYFORM3 are generated by solving trajectory optimization problems for the kinematic, discrete-time model in 7D joint space, where for each demonstration T = 6 and control constraints u t ∈ [−2, 2] 7, for all t (see Figures 2, 3). The constraint is recovered with Problem 4, where H(θ) = [I, −I] and h(θ) = θ = [x,ȳ,z,ᾱ,β,γ, x, y, z, α, β, γ]. From this data, Problem 4 is solved in 1.19 seconds on a 2017 Macbook Pro and returns the true θ and G s = S. G s is efficiently recovered using the insight that the axis-aligned bounding box of any two constraint states in G s must be contained in G s, since G s is the union of axis-aligned boxes and therefore must also be an axis-aligned box. VI. In this paper, we extend BID2 to learn higher dimensional constraints by leveraging a known parameterization. We show that the constraint recovery problem for the parameterized case can be solved with mixed integer programming, and evaluate the method on learning a 6D pose constraint for a 7-DOF robot arm. Future work involves using learned constraints for probabilistically safe planning and developing safe exploration strategies and active demonstration-querying strategies to reduce the uncertainty in the learned constraint.
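As a concrete toy instance of Problem 4, the following sketch recovers an axis-aligned box A(theta) = {k | lo <= k <= hi} from synthetic safe states and unsafe trajectories using big-M constraints in cvxpy. The data generation is hypothetical, and a MIP-capable solver (e.g. GLPK_MI or CBC) is assumed to be installed:

    import numpy as np
    import cvxpy as cp

    d, M, eps = 2, 10.0, 1e-3
    true_lo, true_hi = np.array([0.3, 0.4]), np.array([0.6, 0.8])  # unknown box A
    rng = np.random.default_rng(0)

    inside = lambda k: bool(np.all((k >= true_lo) & (k <= true_hi)))
    safe = [k for k in rng.uniform(0, 1, (60, d)) if not inside(k)]
    unsafe_trajs = [np.vstack([rng.uniform(0, 1, d),            # one state of each
                               rng.uniform(true_lo, true_hi)])  # trajectory is unsafe
                    for _ in range(20)]

    lo, hi = cp.Variable(d), cp.Variable(d)
    cons = [lo <= hi, lo >= 0, hi <= 1]
    for k in safe:              # (2a): every safe state lies outside the box
        b = cp.Variable(2 * d, boolean=True)
        for i in range(d):
            cons += [lo[i] >= k[i] + eps - M * (1 - b[i]),       # b[i]=1: below face i
                     hi[i] <= k[i] - eps + M * (1 - b[d + i])]   # b[d+i]=1: above face i
        cons += [cp.sum(b) >= 1]
    for traj in unsafe_trajs:   # (2b): some state on each unsafe trajectory is inside
        z = cp.Variable(len(traj), boolean=True)
        for t in range(len(traj)):
            for i in range(d):
                cons += [lo[i] <= traj[t, i] + M * (1 - z[t]),
                         hi[i] >= traj[t, i] - M * (1 - z[t])]
        cons += [cp.sum(z) >= 1]

    cp.Problem(cp.Minimize(0), cons).solve(solver=cp.GLPK_MI)
    print(lo.value, hi.value)   # a box consistent with all demonstrations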
We can learn high-dimensional constraints from demonstrations by sampling unsafe trajectories and leveraging a known constraint parameterization.
931
scitldr
We propose a metric-learning framework for computing distance-preserving maps that generate low-dimensional embeddings for a certain class of manifolds. We employ Siamese networks to solve the problem of least squares multidimensional scaling for generating mappings that preserve geodesic distances on the manifold. In contrast to previous parametric manifold learning methods, we show a substantial reduction in training effort enabled by the computation of geodesic distances in a farthest point sampling strategy. Additionally, the use of a network to model the distance-preserving map reduces the complexity of the multidimensional scaling problem and leads to an improved non-local generalization of the manifold compared to analogous non-parametric counterparts. We demonstrate our claims on point-cloud data and on image manifolds and show a numerical analysis of our technique to facilitate a greater understanding of the representational power of neural networks in modeling manifold data. The characterization of distance-preserving maps is of fundamental interest to the problem of nonlinear dimensionality reduction and manifold learning. For the purpose of achieving a coherent global representation, it is often desirable to embed the high-dimensional data into a space of low dimensionality while preserving the metric structure of the data manifold. The intrinsic nature of the geodesic distance makes such a representation depend only on the geometry of the manifold and not on how it is embedded in ambient space. In the context of dimensionality reduction this property makes the resultant embedding meaningful. The success of deep learning has shown that neural networks can be trained as powerful function approximators of complex attributes governing various visual and auditory phenomena. The availability of large amounts of data and computational power, coupled with parallel streaming architectures and improved optimization techniques, have all led to computational frameworks that efficiently exploit their representational power. However, a study of their behavior under geometric constraints is an interesting question which has been relatively unexplored. In this paper, we use the computational infrastructure of neural networks to model maps that preserve geodesic distances on data manifolds. We revisit the classical geometric framework of multidimensional scaling to find a configuration of points that satisfies pairwise distance constraints. We show that instead of optimizing over the individual coordinates of the points, we can optimize over the function that generates these points by modeling this map as a neural network. This makes the complexity of the problem depend on the number of parameters of the network rather than the number of data points, and thus significantly reduces the memory and computational complexities, a property that comes into practical play when the number of data points is large. Additionally, the choice of modeling the isometric map with a parametric model provides a straightforward out-of-sample extension, which is a simple forward pass of the network. We exploit efficient sampling techniques that progressively select landmark points on the manifold by maximizing the spread of their pairwise geodesic distances. We demonstrate that a small number of these landmark points is sufficient to train a network to generate faithful low-dimensional embeddings of manifolds. Figure 1 provides a visualization of the proposed approach.
In the interest of gauging their effectiveness in representing manifolds, we perform a numerical analysis to measure the quality of embedding generated by neural networks and associate an order of accuracy to a given architecture. (Figure 1: Learning to unfurl a ribbon: a three dimensional Helical Ribbon and its two dimensional embedding learned using a two-layer MLP. The network was trained using estimated pairwise geodesic distances between only 100 points, marked in black, out of the total 8192 samples.) Finally, we demonstrate that parametric models provide better non-local generalization as compared to extrapolation formulas of their non-parametric counterparts. We advocate strengthening the link between axiomatic computation and parametric learning methodologies. Existing MDS frameworks use a geometrically meaningful objective in a cumbersome non-parametric framework. At the other end, learning-based methods such as DrLim BID18 use a computationally desirable infrastructure yet a geometrically suboptimal objective requiring too many examples for satisfactory manifold learning. The proposed approach can be interpreted as taking the middle path, using the computationally desirable method of a parametric neural network optimized by the geometrically meaningful cost of multidimensional scaling. The literature on manifold learning is dominated by spectral methods that have a characteristic computational template. The first step involves the computation of the k-nearest neighbors of all N data points. Then, an N × N square matrix is populated using some geometric principle which characterizes the nature of the desired low dimensional embedding. The eigenvalue decomposition of this matrix is used to obtain the low-dimensional representation of the manifold. Laplacian Eigenmaps BID2, HLLE (Donoho & Grimes) and Diffusion Maps (Coifman et al., 2005) are considered to be local methods, since they are designed to minimize some form of local distortion and hence result in embeddings which preserve locality. Methods like Isomap are considered global because they enforce preserving all geodesic distances in the low dimensional embedding. Local methods lead to sparse matrix eigenvalue problems and hence are computationally advantageous. However, global methods are more robust to noise and achieve globally coherent embeddings, in contrast to the local methods, which can sometimes lead to excessively clustered results. All spectral techniques are non-parametric in nature and hence do not characterize the map that generates them. Therefore, the computational burden of large spectral decompositions becomes a major drawback when the number of data points is large. BID3 and BID13 address this issue by providing formulas for out-of-sample extensions to the spectral algorithms. However, these interpolating formulations are computationally inefficient and exhibit poor non-local generalization of the manifold BID4. Multidimensional scaling (henceforth MDS) is a classical problem in geometry processing and data science and a powerful tool for obtaining a global picture of data when only pairwise distances or dissimilarity information is available. The core idea of MDS is to find an embedding X = {x_1, x_2, ..., x_N} such that the pairwise distances measured in the embedding space are faithful to the desired distances {D_ij}_{i,j=1..N} as much as possible. There are two prominent versions of MDS: Classical Scaling and Least Squares Scaling.
Classical Scaling is based on the observation that the double centering of a pairwise squared distance matrix gives an inner-product matrix which can be factored to obtain the desired embedding. Therefore, if H = I − (1/N) 1 1^T is the centering matrix and D_s the matrix of squared pairwise distances, classical scaling minimizes the Strain of the embedding configuration X, given by Strain(X) = || X X^T + (1/2) H D_s H ||_F^2, (1) and is computed conveniently using the eigen-decomposition of the N × N matrix −(1/2) H D_s H. At the other end, least squares scaling is based on minimizing the misfits between the pairwise distances ||x_i − x_j|| of the embedding and the desired distances {D_ij}_{i,j=1..N}, measured by the Stress function Stress(X) = Σ_{i<j} (||x_i − x_j|| − D_ij)^2. (2) In the context of manifold learning and dimensionality reduction, the MDS framework is enabled by estimating all pairwise geodesic distances with a shortest path algorithm like Dijkstra's, and using the minimizers of Equations 1 and 2 to generate global embeddings that preserve metric properties of the manifold. Related techniques include BID5, which used spectral representations of the Laplace-Beltrami Operator of the manifold. BID24 shows a least squares scaling technique which can overcome holes and non-convex boundaries of the manifold. All these algorithms are non-parametric, and few out-of-sample extensions have been suggested BID13 to generalize them to new samples. Examining the ability of neural networks to represent data manifolds has received considerable interest and has been studied from multiple perspectives. From the viewpoint of unsupervised parametric manifold learning, one notable approach is based on the metric-learning arrangement of the Siamese configuration BID18; BID6; BID9. Similarly, the parametric version of Stochastic Neighborhood Embedding (van der Maaten) is another example of using a neural network to generate a parametric map that is trained to preserve local structure BID21. However, these techniques demand an extensive training effort, requiring a large number of training examples in order to generate satisfactory embeddings. BID30 use a parametric network to learn a classifier which enforces a manifold criterion, requiring nearby points to have similar representations. BID1 have argued that neural networks can efficiently represent manifolds as monotonic chains of linear segments by providing an architectural construction and analysis. However, they do not address the manifold learning problem, and their experiments are based on supervised settings where the ground-truth embedding is known a priori. BID17; BID22; BID10 use neural networks specifically for solving the out-of-sample extension problem for manifold learning. However, their procedure involves training a network to follow a pre-computed non-parametric embedding rather than adopting an entirely unsupervised approach, thereby inheriting some of the deficiencies of the non-parametric methods. It is advantageous to adopt a parametric approach to non-linear dimensionality reduction and replace the computational block of the matrix construction and eigenvalue decomposition with a straightforward parametric computation. Directly characterizing the non-linear map provides a simple out-of-sample extension, which is a plain forward pass of the network. More importantly, it is expected that the reduced and tightly controlled parameters would lead to an improved non-local generalization of the manifold BID4. In BID18 (henceforth DrLim), it was proposed to use Siamese Networks for manifold learning using the popular hinge-embedding criterion as a loss function.
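To make the classical-scaling step concrete, here is a minimal numpy sketch of the double-centering and eigen-decomposition described above; the function name and the choice of a 2D target dimension are ours, not the paper's.

```python
import numpy as np

def classical_mds(D, d=2):
    """Classical scaling of a pairwise (geodesic) distance matrix D (N x N)."""
    N = D.shape[0]
    H = np.eye(N) - np.ones((N, N)) / N        # centering matrix H = I - (1/N) 1 1^T
    B = -0.5 * H @ (D ** 2) @ H                # double-centered inner-product matrix
    w, V = np.linalg.eigh(B)                   # eigen-decomposition (ascending order)
    idx = np.argsort(w)[::-1][:d]              # keep the top-d eigenpairs
    w, V = w[idx], V[:, idx]
    return V * np.sqrt(np.maximum(w, 0.0))     # (N, d) embedding coordinates
```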
A Siamese configuration (Figure 2: Siamese Configuration) comprises two identical networks that process two separate units of data to achieve output pairs that are compared in a loss function. The contrastive training comprises constructing pairs {(X_1^(k), X_2^(k)), λ^(k)}, where λ^(k) ∈ {0, 1} is a label indicating a positive (a neighbor) or negative pair (not a neighbor), obtained by building a nearest neighbor graph from the manifold data, and minimizing the hinge-embedding criterion L^(k) = λ^(k) ||y_1^(k) − y_2^(k)||^2 + (1 − λ^(k)) max(0, µ − ||y_1^(k) − y_2^(k)||)^2, (3) where y^(k) = F(X^(k)) denotes the network output. Training with the loss in Equation 3 means that at any given update step, a negative pair contributes to the training only when their pairwise distance is less than µ. This leads to a hard-negative sampling problem, where the quality of embedding depends on the selection of negative examples in order to prevent excessive clustering of the neighbors. This typically requires an extensive training effort with a huge amount of training data (30000 positive and approximately 17.9 million negative pairs for a total of 6000 data samples, as reported in BID18). We propose to incorporate the ideas of Least Squares Scaling into the computational infrastructure of the Siamese configuration, as shown in Figure 2. For every k-th pair, we estimate the geodesic distance using a shortest path algorithm and train the network to preserve these distances by minimizing the Stress function L(θ) = Σ_k ( ||F_θ(X_1^(k)) − F_θ(X_2^(k))|| − D_geo^(k) )^2. (4) The advantage of adopting the loss in Equation 4 over Equation 3 is that every pair of training data contributes to the learning process, thereby eliminating the negative sampling problem. More importantly, it facilitates the use of efficient manifold sampling techniques like the Farthest Point Sampling strategy that make it possible to train with far fewer pairs of examples. The farthest point sampling strategy BID7 (also referred to as the MinMax strategy in BID13) is a method to pick landmarks amongst the points of a discretely sampled manifold such that, under certain conditions, these samples cover the manifold as uniformly as possible. Starting from a random selection, the landmarks are chosen one at a time such that each new selection from the unused samples has the largest geodesic distance to the set of already selected sample points. FIG1 provides a visualization of this sampling mechanism. We train the network by minimizing the loss in Equation 4 over the pairwise geodesic distances of the landmarks. Therefore, the pre-training computational effort is confined to computing the pairwise geodesic distances of only the landmark points. The proposed geometric manifold learning algorithm can be summarized in two steps. Step 1: Compute the nearest-neighbor graph from the manifold data and obtain a set of landmark points and their corresponding pairwise graph/geodesic distances using Dijkstra's algorithm and the Farthest Point Strategy. Step 2: Form a dataset of landmark pairs with corresponding geodesic distances {(X_1^(k), X_2^(k)), D_geo^(k)} and train the network in the Siamese configuration using the least-squares MDS loss in Equation 4. 4 EXPERIMENTS. 4.1 3D POINT CLOUD DATA. Our first set of experiments is based on point-cloud manifolds, e.g., the Swiss Roll, S-Curve and the Helical Ribbon. We use a multilayer perceptron (MLP) with the PReLU non-linear activation function, given by PReLU(x) = max(0, x) + a min(0, x), where a is a learnable parameter. The networks are trained using the ADAM optimizer with constants (β_1, β_2) = (0.95, 0.99) and a learning rate of 0.01 for 1000 iterations. We run each optimization 5 times with random initialization to ensure convergence.
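As a sketch of Step 1 and Step 2, the snippet below pairs a greedy farthest-point sampler with the Siamese stress loss of Equation 4 in PyTorch; the 2-layer/70-unit sizes follow the analysis later in the text, while the function names and batching details are our own assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

def farthest_point_sample(D, k):
    """Greedy farthest-point sampling given an (N, N) geodesic distance matrix D."""
    landmarks = [0]                              # arbitrary first landmark
    min_d = D[0].copy()                          # distance of every point to the set
    for _ in range(k - 1):
        nxt = int(min_d.argmax())                # farthest point from current set
        landmarks.append(nxt)
        min_d = np.minimum(min_d, D[nxt])
    return landmarks

class DistancePreservingMap(nn.Module):
    """Two-layer MLP (70 hidden units, PReLU) mapping samples to 2D coordinates."""
    def __init__(self, in_dim, hidden=70, out_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.PReLU(),
            nn.Linear(hidden, hidden), nn.PReLU(),
            nn.Linear(hidden, out_dim))
    def forward(self, x):
        return self.net(x)

def stress_loss(model, x1, x2, d_geo):
    """Least-squares MDS loss of Equation 4 over a batch of landmark pairs."""
    d_emb = torch.norm(model(x1) - model(x2), dim=1)
    return ((d_emb - d_geo) ** 2).sum()
```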
All experiments are implemented in Python using the PyTorch framework BID23. We used the scikit-learn machine learning library for the nearest-neighbor computation and scipy-sparse for Dijkstra's shortest path algorithm. FIG1 shows the results of our method on the Helical Ribbon and S-Curve respectively, with varying numbers of training samples (in black) out of a total of 8172 data points. The number of landmarks dictates the approximation quality of the low-dimensional embedding generated by the network. Training with too few samples will result in inadequate generalization, which can be inferred from the corrugations of the unfurled result in the first two parts of FIG1, and increasing the number of landmarks expectedly improves the quality of the embedding. We compute the Stress function of the entire point configuration to measure the quality of the MDS fit. FIG2 shows the decay in the Stress as a function of the number of training points (or landmarks) of a 2-layer MLP. The natural next questions to ask are: how many landmarks? how many layers? and how many hidden nodes per layer? We observe that these questions relate to an analogous setup in numerical methods for differential equations. For a given numerical technique, the accuracy of the solution depends on the resolution of the spatial grid over which the solution is estimated. Therefore, numerical methods are ranked by an assessment of the order of accuracy their solutions observe. This can be obtained by assuming that the relationship between the approximation error E and the resolution of the grid h is given by E = C h^P, where P is the order of accuracy of the technique and C is some constant. Therefore, P is obtained by computing the slope of the line obtained by charting log(E) vs log(h): log(E) = log(C) + P log(h). We extend the same principle to evaluate network architectures (in place of numerical algorithms) for estimating the quality of isometric maps. We use the generalized Stress as the error function. We assume that, due to the 2-approximate property of the farthest point strategy BID19, the sampling is approximately uniform, and hence h ∝ 1/√K, where K is the number of landmarks. By varying the number of layers and the number of nodes per layer, we associate an order of accuracy to each architecture using the relation above. FIG2 shows the results of our experiment. It shows that a single layer MLP has the capacity for modeling functions up to the first order of accuracy. Adding an additional layer increases the representational power by moving to a second order result. Adding more layers does not provide any substantive gain, arguably due to a larger likelihood of over-fitting, as seen in the considerably noisier estimates (in green). Therefore, a two layer MLP with 70 hidden nodes per layer can be construed as a good architecture for approximating the isometric map of the S-Curve of FIG1 with 200 landmarks. We extend the parametric MDS framework to image articulation manifolds, where each sample point is a binary image governed by the modulation of a few parameters. We specifically deal with image manifolds that are isometric to Euclidean space BID16, that is, the geodesic distance between any two sample points is equal to the Euclidean distance between their articulation parameters. In the context of the main discussion of this paper, the metric preserving properties of manifolds, we find that such datasets provide an appropriate test-bed for evaluating metric preserving algorithms.
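The order-of-accuracy fit described here reduces to a linear regression in log-log space; a small numpy sketch under the stated assumption h ∝ 1/√K (names are ours):

```python
import numpy as np

def order_of_accuracy(stresses, num_landmarks):
    """Estimate P in E = C h^P by fitting log(E) = log(C) + P log(h),
    with grid resolution h taken proportional to 1/sqrt(K)."""
    h = 1.0 / np.sqrt(np.asarray(num_landmarks, dtype=float))
    P, logC = np.polyfit(np.log(h), np.log(np.asarray(stresses)), deg=1)
    return P   # slope = order of accuracy of the architecture
```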
We construct a horizon articulation manifold where each image contains two distinct regions separated by a horizon which is modulated by a linear combination of two fixed sinusoidal basis elements, h = α_1 b_1 + α_2 b_2. See Figure 5. Thus, each sample has an intrinsic dimensionality of two: the articulation parameters (α_1, α_2), which govern how the sinusoids representing the horizon are mixed. We sample the articulation parameters from a 2D uniform distribution. (Figure 6: (a) Embeddings compared to ground truth; the proposed method shows maximum fidelity to the ground truth shown in Figure 5. (b) Visualizing the outputs of some filters trained using our method; the 1st column shows the input images, and the filters act to exaggerate (2nd column) and suppress (3rd column) a governing frequency of the horizon.) We generate 1000 images of the horizon articulation manifold of size 100 × 100. The network architecture comprises two convolution layers, with kernel sizes 12 and 9 and 15 and 2 kernels respectively, each with a stride of 3, followed by a fully connected layer mapping the image to a two dimensional entity. We train using the ADAM optimizer BID20 with a learning rate of 0.01 and parameters (β_1, β_2) = (0.95, 0.99). We train the network using 50 landmark points. Figure 6a shows the comparison between our method and other non-parametric counterparts along with the parametric approach of DrLim. Training with the least squares loss of Equation 4 shows high fidelity to the ground truth of Figure 5. Except for Isomap and our proposed method, all other methods show some form of distortion, indicating a suboptimal metric preservation property. Training with the articulation manifold in Figure 5 provides an opportunity to get more detail in understanding the parametric action of the neural networks. The 2nd and 3rd columns in Figure 6b show the outputs of some of the filters in the first layer of the architecture trained on the manifold in Figure 5. The distance preserving loss facilitates the learning of image filters which separate the underlying frequencies governing the non-linear manifold, thereby providing a visual validation of the parametric map. We compare our parametric multi-dimensional scaling approach to its direct non-parametric competitor: Landmark-Isomap (de Silva & Tenenbaum) BID13. The main idea of Landmark-Isomap is to perform classical scaling on the inter-geodesic distance matrix of only the landmarks and to estimate the embeddings of the remaining points using an interpolating formula (also mentioned in BID3). The formula uses the estimated geodesic distance of each new point to the selected landmarks in order to estimate its low dimensional embedding. We use the image articulation manifold dataset to provide a quantitative and visual comparison between the two methods. Both methods are given the same set of landmarks for evaluation. In the first experiment, we generate two independent horizon articulation datasets, each containing 1000 samples, for training and testing. We then successively train both algorithms on the training dataset with varying numbers of landmark points and then use the examples from the test dataset to evaluate performance. FIG4 (a) shows that the low dimensional embedding of Landmark-Isomap admits smaller stress values (hence better metric preservation) for the training dataset, but behaves poorly on unseen examples. On the other hand, despite larger stress values in training, the network shows better generalizability.
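A sketch of how such a horizon articulation dataset can be generated; the specific sinusoid frequencies, coordinate grids, and parameter range below are illustrative assumptions, since the text only specifies two fixed sinusoidal basis elements and 100 × 100 binary images:

```python
import numpy as np

def horizon_image(a1, a2, size=100, w1=2.0, w2=5.0):
    """Binary image split by the horizon h(x) = a1*sin(w1*x) + a2*sin(w2*x)."""
    x = np.linspace(0.0, 2.0 * np.pi, size)
    horizon = a1 * np.sin(w1 * x) + a2 * np.sin(w2 * x)
    rows = np.linspace(-2.0, 2.0, size)[:, None]         # vertical coordinate grid
    return (rows < horizon[None, :]).astype(np.float32)  # (size, size) image

rng = np.random.default_rng(0)
alphas = rng.uniform(0.0, 1.0, size=(1000, 2))           # articulation parameters
images = np.stack([horizon_image(a1, a2) for a1, a2 in alphas])
```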
In order to visualize non-local generalization properties, we repeat the previous experiment with a minor modification. We train both algorithms on horizon articulation manifolds with parameters sampled from [0, 0.75], and visualize the outputs on test datasets with parameters sampled from outside this range, thereby isolating a part of the manifold during training. As shown in FIG4, the output of Landmark-Isomap shows a clustered result due to the lack of non-local data in the geodesic distance calculations for the interpolation. In contrast, the network clearly shows a better generalization property. Finally, we test our method on a more realistic dataset where the constraint of being isometric to a low dimensional Euclidean space is not necessarily strict. We generate 1369 images obtained by smoothly varying the azimuth and elevation of the camera, amounting to a total of 1369 · 1368 / 2 = 936396 pairs. Our integrated approach yields an improved result in a considerably smaller training time (geodesic distances between only 600 · 599 / 2 = 179700 pairs). We used the same 600 landmarks from Isomap and the same architecture as DrLim for generating the embedding in Figure 8c. In the interest of obtaining a better understanding of neural network behavior, we advocate using learning methodologies for solving geometric problems with data by allowing a limited infusion of axiomatic computation into the learning process. In this paper we demonstrate such a scheme by combining parametric modeling with neural networks and the geometric framework of multidimensional scaling. The result of this union is a reduction in training effort and improved local and non-local generalization abilities. As future work, we intend to further explore methods that leverage learning methodologies for improving the largely axiomatic setups of numerical algorithms.
Parametric Manifold Learning with Neural Networks in a Geometric Framework
932
scitldr
Exploration is a fundamental aspect of Reinforcement Learning, typically implemented using stochastic action-selection. Exploration, however, can be more efficient if directed toward gaining new world knowledge. Visit-counters have been proven useful both in practice and in theory for directed exploration. However, a major limitation of counters is their locality. While there are a few model-based solutions to this shortcoming, a model-free approach is still missing. We propose $E$-values, a generalization of counters that can be used to evaluate the propagating exploratory value over state-action trajectories. We compare our approach to commonly used RL techniques, and show that using $E$-values improves learning and performance over traditional counters. We also show how our method can be implemented with function approximation to efficiently learn continuous MDPs. We demonstrate this by showing that our approach surpasses state of the art performance in the Freeway Atari 2600 game. "If there's a place you gotta go - I'm the one you need to know." (Map, Dora The Explorer) We consider Reinforcement Learning in a Markov Decision Process (MDP). An MDP is a five-tuple M = (S, A, P, R, γ), where S is a set of states and A is a set of actions. The dynamics of the process is given by P(s'|s, a), which denotes the transition probability from state s to state s' following action a. Each such transition also has a distribution R(r|s, a) from which the reward for such transitions is sampled. Given a policy π: S → A, a function (possibly stochastic) deciding which actions to take in each of the states, the state-action value function Q^π: S × A → R satisfies Q^π(s, a) = E_{r,s' ∼ R×P(·|s,a)}[r + γ Q^π(s', π(s'))], where γ is the discount factor. The agent's goal is to find an optimal policy π* that maximizes Q^π(s, π(s)). For brevity, Q^{π*} ≜ Q*. There are two main approaches for learning π*. The first is a model-based approach, where the agent learns an internal model of the MDP (namely P and R). Given a model, the optimal policy could be found using dynamic programming methods such as Value Iteration BID19. The alternative is a model-free approach, where the agent learns only the value function of states or state-action pairs, without learning a model BID5. The ideas put forward in this paper are relevant to any model-free learning of MDPs. For concreteness, we focus on a particular example, Q-Learning BID23 BID19. Q-Learning is a common method for learning Q*, where the agent iteratively updates its values of Q(s, a) by performing actions and observing their outcomes. At each step the agent takes action a_t, is transferred from s_t to s_{t+1}, and observes reward r. Then it applies the update rule regulated by a learning rate α: Q(s_t, a_t) ← (1 − α) Q(s_t, a_t) + α (r + γ max_a Q(s_{t+1}, a)). Balancing between Exploration and Exploitation is a major challenge in Reinforcement Learning. Seemingly, the agent may want to choose the alternative associated with the highest expected reward, a behavior known as exploitation. However, in that case it may fail to learn that there are better options. Therefore exploration, namely the taking of new actions and the visiting of new states, may also be beneficial. It is important to note that exploitation is also inherently relevant for learning, as we want the agent to have better estimations of the values of valuable state-actions, and we care less about the exact values of actions that the agent already knows to be clearly inferior.
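A minimal tabular sketch of the Q-Learning update just quoted (the array indexing and hyper-parameter values are illustrative):

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Q(s,a) <- (1 - alpha) Q(s,a) + alpha (r + gamma max_a' Q(s',a'))."""
    Q[s, a] = (1 - alpha) * Q[s, a] + alpha * (r + gamma * Q[s_next].max())
```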
Formally, to guarantee convergence to Q*, the Q-Learning algorithm must visit each state-action pair infinitely many times. A naive random walk exploration is sufficient for converging asymptotically. However, such random exploration has two major limitations when the learning process is finite. First, the agent does not utilize its current knowledge about the world to guide its exploration. For example, an action with a known disastrous outcome will be explored over and over again. Second, the agent is not biased in favor of exploring unvisited trajectories more than the visited ones - hence "wasting" exploration resources on actions and trajectories which are already well known to it. A widely used method for dealing with the first problem is the ε-greedy schema BID19, in which with probability 1 − ε the agent greedily chooses the best action (according to its current estimation), and with probability ε it chooses a random action. Another popular alternative, emphasizing the preference to learn about actions associated with higher rewards, is to draw actions from a Boltzmann distribution (Softmax) over the learned Q values, regulated by a temperature parameter. While such approaches lead to more informed exploration that is based on learning experience, they still fail to address the second issue, namely they are not directed BID20 towards gaining more knowledge, not biasing actions in the direction of unexplored trajectories. Another important approach in the study of efficient exploration is based on Sample Complexity of Exploration, as defined in the PAC-MDP literature BID6. Relevant to our work is Delayed Q-Learning BID17, a model-free algorithm that has theoretical PAC-MDP guarantees. However, to ensure these theoretical guarantees this algorithm uses a conservative exploration which might be impractical (see also BID7 and Appendix B). In order to achieve directed exploration, the estimation of an exploration value of the different state-actions (often termed exploration bonus) is needed. The most commonly used exploration bonus is based on counting: for each pair (s, a), store a counter C(s, a) that indicates how many times the agent performed action a at state s so far. Counter-based methods are widely used both in practice and in theory BID7 BID16 BID3 BID2. Other options for evaluating exploration include recency and value difference (or error) measures BID20 BID21. While all of these exploration measures can be used for directed exploration, their major limitation in a model-free setting is that the exploratory value of a state-action pair is evaluated with respect only to its immediate outcome, one step ahead. It seems desirable to determine the exploratory value of an action not only by how much new immediate knowledge the agent gains from it, but also by how much more new knowledge could be gained from a trajectory starting with it. The goal of this work is to develop a measure for such exploratory values of state-action pairs, in a model-free setting. The challenge discussed in Section 1.2 is in fact similar to that of learning the value functions. The value of a state-action pair represents not only the immediate reward, but also the temporally discounted sum of expected rewards over a trajectory starting from this state and action. Similarly, the "exploration-value" of a state-action pair should represent not only the immediate knowledge gained, but also the expected future gained knowledge.
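For reference, the two undirected baselines mentioned here, ε-greedy and Boltzmann (Softmax) action selection, can be sketched as follows (a sketch, with Q stored as a |S| × |A| array):

```python
import numpy as np

def epsilon_greedy(Q, s, eps, rng):
    """With probability eps take a random action, otherwise the greedy one."""
    if rng.random() < eps:
        return int(rng.integers(Q.shape[1]))
    return int(Q[s].argmax())

def boltzmann(Q, s, temperature, rng):
    """Draw an action from a Softmax distribution over Q(s, .)."""
    z = Q[s] / temperature
    p = np.exp(z - z.max())                      # stabilized exponentials
    p /= p.sum()
    return int(rng.choice(len(p), p=p))
```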
This suggests that a similar approach to that used for value-learning might be appropriate for learning the exploration values as well, using the exploration bonus as the immediate reward. However, because it is reasonable to require the exploration bonus to decrease over repetitions of the same trajectories, a naive implementation would violate the Markovian property. This challenge has been addressed in a model-based setting: the idea is to use at every step the current estimate of the parameters of the MDP in order to compute, using dynamic programming, the future exploration bonus BID8. However, this solution cannot be implemented in a model-free setting. Therefore, a satisfying approach for propagating directed exploration in model-free reinforcement learning is still missing. In this section, we propose such an approach. We propose a novel approach for directed exploration, based on two parallel MDPs. One MDP is the original MDP, which is used to estimate the value function. The second MDP is identical except for one important difference. We posit that there are no rewards associated with any of the state-actions. Thus, the true value of all state-action pairs is 0. We will use an RL algorithm to "learn" the "action-values" in this new MDP, which we denote as E-values. We will show that these E-values represent the missing knowledge and thus can be used for propagating directed exploration. This will be done by initializing E-values to 1. These positive initial conditions will subsequently result in an optimistic bias that will lead to directed exploration, by giving high estimations only to state-action pairs from which an optimistic outcome has not yet been excluded by the agent's experience. Formally, given an MDP M = (S, A, P, R, γ), we construct a new MDP M' = (S, A, P, 0, γ_E), with 0 denoting the identically zero reward function and 0 ≤ γ_E < 1 a discount parameter. The agent now learns both Q and E values concurrently, while initially E(s, a) = 1 for all s, a. Clearly, E* = 0. However, intuitively, the value of E(s, a) at a given timestep during training stands for the knowledge, or uncertainty, that the agent has regarding this state-action pair. Eventually, after enough exploration, there is no additional knowledge left to discover, which corresponds to E(s, a) → E*(s, a) = 0. For learning E, we use the SARSA algorithm BID13 BID19, which differs from Watkins' Q-Learning by being on-policy, following the update rule E(s_t, a_t) ← (1 − α_E) E(s_t, a_t) + α_E γ_E E(s_{t+1}, a_{t+1}), where α_E is the learning rate. For simplicity, we will assume throughout the paper that α_E = α. Note that this learning rule updates the E-values based on E(s_{t+1}, a_{t+1}) rather than max_a E(s_{t+1}, a), thus not considering potentially highly informative actions which are never selected. This is important for guaranteeing that exploration values will decrease when repeating the same trajectory (as we will show below). Maintaining these additional updates doesn't affect the asymptotic space/time complexity of the learning algorithm, since it simply performs the same updates of a standard Q-Learning process twice. The logarithm of E-values can be thought of as a generalization of visit counters, with propagation of the values along state-action pairs. To see this, let us examine the case of γ_E = 0, in which there is no propagation from future states. In this case, the update rule reduces to E(s_t, a_t) ← (1 − α) E(s_t, a_t), so after being visited n times, the value of the state-action pair is (1 − α)^n, where α is the learning rate.
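The E-value learning rule is literally a SARSA update in the reward-free MDP; a one-line tabular sketch (E initialized to ones, function name ours):

```python
def e_value_update(E, s, a, s_next, a_next, alpha=0.1, gamma_e=0.9):
    """SARSA step for E-values: the 'reward' is always 0 in the reward-free MDP."""
    E[s, a] = (1 - alpha) * E[s, a] + alpha * gamma_e * E[s_next, a_next]
```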
By taking a logarithm transformation, we can see that log_{1−α}(E) = n. In addition, when s is a terminal state with one action, log_{1−α}(E) = n for any value of γ_E. When γ_E > 0 and for non-terminal states, E will decrease more slowly, and therefore log_{1−α}(E) will increase more slowly than a counter. The exact rate will depend on the MDP, the policy and the specific value of γ_E. Crucially, for state-actions which lead to many potential states, each visit contributes less to the generalized counter, because more visits are required to exhaust the potential outcomes of the action. To gain more insight, consider the MDP depicted in FIG0 left, a tree with the root as initial state and the leaves as terminal states. If actions are chosen sequentially, one leaf after the other, we expect that each complete round of choices (which will result in k actual visits of the (s, start) pair) will be roughly equivalent to one generalized counter. Simulations of this and other simple MDPs show that E-values behave in accordance with such intuitions (see FIG0 right). An important property of E-values is that they decrease over repetitions. Formally, by completing a trajectory of the form s_0, a_0, ..., s_n, a_n, s_0, a_0 in the MDP, the maximal value of E(s_i, a_i) will decrease. To see this, assume that E(s_i, a_i) was maximal, and consider its value after the update, E(s_i, a_i) ← (1 − α) E(s_i, a_i) + α γ_E E(s_{i+1}, a_{i+1}). Since γ_E E(s_{i+1}, a_{i+1}) < E(s_i, a_i), we get that after the update, the value of E(s_i, a_i) decreased. For any non-maximal (s_j, a_j), its value after the update is a convex combination of its previous value and γ_E E(s_k, a_k), which is not larger than its composing terms, which in turn are smaller than the maximal E-value. The logarithm of E-values can be considered as a generalization of counters. As such, algorithms that utilize counters can be generalized to incorporate E-values. Here we consider two such generalizations. In model-based RL, counters have been used to create an augmented reward function. Motivated by this result, augmenting the reward with a counter-based exploration bonus has also been used in model-free RL BID15 BID1. E-values can naturally generalize this approach, by replacing the standard counter with its corresponding generalized counter (log_{1−α} E). To demonstrate the advantage of using E-values over standard counters, we tested an ε-greedy agent with an exploration bonus of 1/log_{1−α}(E) added to the observed reward on the bridge MDP (Figure 2). To measure the learning progress and its convergence, we calculated the mean square error between Q and Q* per episode, where the average is over the probability of state-action pairs when following the optimal policy π*. (Figure 3: Convergence of ε-greedy on the short bridge environment (k = 5) with and without exploration bonuses added to the reward; note the logarithmic scale of the abscissa.) We varied the value of γ_E from 0 - resulting effectively in standard counters - to γ_E = 0.9. Our results (Figure 3) show that adding the exploration bonus to the reward leads to faster learning. Moreover, the larger the value of γ_E in this example, the faster the learning, demonstrating that generalized counters significantly outperform standard counters. Another way in which counters can be used to assist exploration is by adding them to the estimated Q-values. In this framework, action-selection is a function not only of the Q-values but also of the counters. Several such action-selection rules have been proposed BID20 BID9 BID7.
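A sketch of the generalized counter and the reward bonus used in the bridge experiment; the small floor on the counter is our own guard for the first visit, where log_{1−α}(E) is still 0:

```python
import numpy as np

def generalized_counter(E, alpha):
    """log_{1-alpha}(E): with gamma_E = 0 this equals the visit count, since
    E(s,a) = (1 - alpha)^n after n visits."""
    return np.log(E) / np.log(1 - alpha)

def reward_with_bonus(r, E, s, a, alpha):
    """Observed reward plus the 1 / log_{1-alpha}(E) exploration bonus."""
    c = max(generalized_counter(E, alpha)[s, a], 1e-8)   # guard unvisited pairs
    return r + 1.0 / c
```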
These rules usually take the form of a deterministic policy that maximizes some combination of the estimated Q-value with a counter-based exploration bonus. It is easy to generalize such rules using E-values - simply replace the counters C by the generalized counters log_{1−α}(E). Here, we consider a special family of action-selection rules that are derived as deterministic equivalents of standard stochastic rules. Stochastic action-selection rules are commonly used in RL. In their simple form they include rules such as the ε-greedy or Softmax exploration described above. In this framework, exploratory behavior is achieved by stochastic action selection, independent of past choices. At first glance, it might be unclear how E-values can contribute to or improve such rules. We now turn to show that, by using counters, for every stochastic rule there exist equivalent deterministic rules. Once turned into deterministic counter-based rules, it is again possible to improve them using E-values. The stochastic action-selection rules determine the frequency of choosing the different actions in the limit of a large number of repetitions, while abstracting away the specific order of choices. This fact is key to understanding the relation between deterministic and stochastic rules. An equivalence of two such rules can only be an in-the-limit equivalence, and can be seen as choosing a specific sample realization from the distribution. Therefore, in order to derive a deterministic equivalent of a given stochastic rule, we only have to make sure that the frequencies of actions selected under both rules are equal in the limit of infinitely many steps. As the probability for each action is likely to depend on the current Q-values, we have to consider fixed Q-values to define this equivalence. We prove that given a stochastic action-selection rule f(a|s), every deterministic policy that does not choose an action that was visited too many times until now (with respect to the expected number according to the probability distribution) is a determinization of f. Formally, let us assume that given a certain Q function and state s we wish a certain ratio between different choices of actions a ∈ A to hold. We denote the frequency of this ratio f_Q(a|s). For brevity we assume s and Q are constants and denote f_Q(a|s) = f(a). We also assume a counter C(s, a) is kept, denoting the number of choices of a in s. For brevity we denote C(s, a) = C(a) and Σ_a C(s, a) = C. When we look at the counters after T steps we use the subscript C_T(a). Following this notation, note that C_T = T. Theorem 3.1. For any sub-linear function b(t) and for any deterministic policy which chooses at step T an action a such that C_T(a) − T f(a) ≤ b(T), it holds that lim_{T→∞} C_T(a)/T = f(a) for every action a. Proof. For a full proof of the theorem see Appendix A in the supplementary materials. The above is not a vacuous truth - we now provide two possible determinization rules that achieve it. One rule follows straightforwardly from the theorem, using b = 0: choosing arg min_a [C(a)/C − f(a)]. Another rule follows the probability ratio between the stochastic policy and the empirical distribution: arg max_a f(a)/C(a). We denote this determinization LLL, because when generalized counters are used instead of counters it becomes arg max_a [log f(a|s) − log log_{1−α} E(s, a)]. Now we can replace the visit counters C(s, a) with the generalized counters log_{1−α}(E(s, a)) to create Directed Outreaching Reinforcement Action-Selection - DORA the explorer.
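The LLL determinization amounts to one argmax per step; a sketch over a single state (f_probs are the stochastic rule's probabilities, assumed strictly positive; the floor on the counters is our guard for unvisited actions):

```python
import numpy as np

def lll_action(f_probs, E_s, alpha):
    """arg max_a [ log f(a|s) - log log_{1-alpha} E(s, a) ]."""
    gen_counts = np.log(E_s) / np.log(1 - alpha)     # generalized counters
    gen_counts = np.maximum(gen_counts, 1e-8)        # guard unvisited actions
    return int(np.argmax(np.log(f_probs) - np.log(gen_counts)))
```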
By this, we can transform any stochastic or counter-based action-selection rule into a deterministic rule in which exploration propagates over the states and the expected trajectories to follow. Algorithm 1 (DORA using the LLL determinization for a stochastic policy f): Input: stochastic action-selection rule f, learning rate α, exploration discount factor γ_E. Initialize Q(s, a) = 0, E(s, a) = 1. For each episode, initialize s, and while the episode has not terminated: choose a = arg max_x [log f_Q(x|s) − log log_{1−α} E(s, x)]; observe the transition (s, a, r, s', a'); update Q(s, a) ← (1 − α) Q(s, a) + α (r + γ max_x Q(s', x)) and E(s, a) ← (1 − α) E(s, a) + α γ_E E(s', a'); set s ← s'. To test this algorithm, the first set of experiments was done on Bridge environments of various lengths k (Figure 2). We considered the following agents: ε-greedy, Softmax, and their respective LLL determinizations (as described in Section 3.2.1) using both counters and E-values. In addition, we compared a more standard counter-based agent in the form of a UCB-like algorithm BID0, following an action-selection rule with an exploration bonus of √(log t / C). We tested two variants of this algorithm, using ordinary visit counters and E-values. Each agent's hyperparameters (ε and temperature) were fitted separately to optimize learning. For stochastic agents, we averaged the results over 50 trials for each execution. Unless stated otherwise, γ_E = 0.9. We also used a normalized version of the bridge environment, where all rewards are between 0 and 1, to compare DORA with the Delayed Q-Learning algorithm BID17. Our results (FIG2) demonstrate that E-value based agents outperform both their counter-based and their stochastic equivalents on the bridge problem. As shown in FIG2, stochastic and counter-based ε-greedy agents, as well as the standard UCB, fail to converge. E-value agents are the first to reach low error values, indicating that they learn faster. Similar results were achieved. The success of E-value based learning relative to counter-based learning implies that the use of E-values leads to more efficient exploration. If this is indeed the case, we expect E-values to better represent the agent's missing knowledge than visit counters during learning. To test this hypothesis we studied the behavior of an E-value LLL Softmax agent on a shorter bridge environment (k = 5). For a given state-action pair, a measure of the missing knowledge is the normalized distance between its estimated value (Q) and its optimal-policy value (Q*). We recorded C, log_{1−α}(E) and (Q − Q*)/Q* for each (s, a) at the end of each episode. Generally, this measure of missing knowledge is expected to be a monotonously-decreasing function of the number of visits (C). This is indeed true, as depicted in FIG3 (left). However, considering all state-action pairs, visit counters do not capture well the amount of missing knowledge, as the convergence level depends not only on the counter but also on the identity of the state-action it counts. By contrast, considering the convergence level as a function of the generalized counter (FIG3, right) reveals a strikingly different pattern. Independently of the state-action identity, the convergence level is a unique function of the generalized counter. These results demonstrate that generalized counters are a useful measure of the amount of missing knowledge. So far we discussed E-values in the tabular case, relying on finite (and small) state and action spaces. However, a main motivation for using a model-free approach is that it can be successfully applied in large MDPs where tabular methods are intractable.
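Putting the pieces together, a runnable sketch of one episode of Algorithm 1, reusing lll_action from the previous sketch; the environment API (reset/step) and the Softmax choice for f are our assumptions:

```python
import numpy as np

def softmax_probs(Q, s, temperature=1.0):
    """Boltzmann probabilities f_Q(.|s), the stochastic rule being determinized."""
    z = Q[s] / temperature
    p = np.exp(z - z.max())
    return p / p.sum()

def dora_episode(env, Q, E, alpha=0.1, gamma=0.99, gamma_e=0.9):
    """One episode of Algorithm 1. `env` is assumed to expose a minimal tabular
    API: reset() -> s and step(a) -> (s_next, r, done)."""
    s = env.reset()
    a = lll_action(softmax_probs(Q, s), E[s], alpha)
    done = False
    while not done:
        s_next, r, done = env.step(a)
        a_next = lll_action(softmax_probs(Q, s_next), E[s_next], alpha)
        # Off-policy Q-Learning update for values, on-policy SARSA update for E.
        Q[s, a] = (1 - alpha) * Q[s, a] + alpha * (r + gamma * Q[s_next].max())
        E[s, a] = (1 - alpha) * E[s, a] + alpha * gamma_e * E[s_next, a_next]
        s, a = s_next, a_next
```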
In this case (in particular for continuous MDPs), achieving directed exploration is a non-trivial task. Because revisiting a state or a state-action pair is unlikely, and because it is intractable to store individual values for all state-action pairs, counter-based methods cannot be directly applied. In fact, most implementations in these cases adopt simple exploration strategies such as ε-greedy or softmax BID1. There are standard model-free techniques to estimate value functions in function-approximation scenarios. Because learning E-values is simply learning another value function, the same techniques can be applied for learning E-values in these scenarios. In this case, the concept of a visit-count - or a generalized visit-count - will depend on the representation of states used by the approximating function. To test whether E-values can serve as generalized visit-counters in the function-approximation case, we used a linear approximation architecture on the MountainCar problem (Appendix C). To dissociate Q and E-values, actions were chosen by an ε-greedy agent independently of E-values. As shown in Appendix C, E-values are an effective way for counting both visits and generalized visits in continuous MDPs. For completeness, we also compared the performance of LLL agents to stochastic agents on a sparse-reward MountainCar problem, and found that LLL agents learn substantially faster than the stochastic agents (Appendix D). To show that our approach scales to complex problems, we used the Freeway Atari 2600 game, which is known as a hard exploration problem BID1. We trained a neural network with two streams to predict the Q and E-values. First, we trained the network using the standard DQN technique BID10, which ignores the E-values. Second, we trained the network while adding an exploration bonus of β√(−log E) to the reward (in all reported simulations, β = 0.05). In both cases, action-selection was performed by an ε-greedy rule, as in BID1. Note that the exploration bonus requires 0 < E < 1. To satisfy this requirement, we applied a logistic activation function on the output of the last layer of the E-value stream, and initialized the weights of this layer to 0. As a result, the E-values were initialized at 0.5 and satisfied 0 < E < 1 throughout the training. In comparison, no non-linearity was applied in the last layer of the Q-value stream and the weights were randomly initialized. We compared our approach to a DQN baseline, as well as to the density model counters suggested by BID1. The baseline used here does not utilize additional enhancements (such as Double DQN and Monte-Carlo return) which were used in BID1. Our results, depicted in FIG4, demonstrate that the use of E-values outperforms both the DQN and the density model counters baselines. In addition, our approach results in better performance than in BID1 (with the mentioned enhancements), converging in approximately 2·10^6 steps instead of 10·10^6 steps. The idea of using reinforcement-learning techniques to estimate exploration can be traced back to BID15 and BID9, who also analyzed propagation of uncertainties and exploration values. These works followed a model-based approach, and did not fully deal with the problem of non-Markovianity arising from using an exploration bonus as the immediate reward. A related approach was used by BID8, where exploration was investigated by information-theoretic measures.
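A sketch of the two-stream output head and the β√(−log E) bonus described here, in PyTorch; the class and tensor names are ours, and the shared torso producing the features h is omitted:

```python
import torch
import torch.nn as nn

class QEHead(nn.Module):
    """Q stream (linear, random init) and E stream (linear + sigmoid, zero init,
    so E starts at 0.5 and stays in (0, 1))."""
    def __init__(self, feat_dim, n_actions):
        super().__init__()
        self.q = nn.Linear(feat_dim, n_actions)
        self.e = nn.Linear(feat_dim, n_actions)
        nn.init.zeros_(self.e.weight)
        nn.init.zeros_(self.e.bias)
    def forward(self, h):
        return self.q(h), torch.sigmoid(self.e(h))

def bonus_augmented_reward(r, e_sa, beta=0.05):
    """Reward augmented with beta * sqrt(-log E(s,a)); valid since 0 < E < 1."""
    return r + beta * torch.sqrt(-torch.log(e_sa))
```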
Such an interpretation of exploration can also be found in other works (BID14; BID18; BID4). Efficient exploration in model-free RL was also analyzed in the PAC-MDP framework, most notably the Delayed Q-Learning algorithm by BID17. For further discussion and comparison of our approach with Delayed Q-Learning, see Section 1.1 and Appendix B. In terms of generalizing counter-based methods, there has been some work on using counter-like notions for exploration in continuous MDPs BID12. A more direct attempt was recently proposed by BID1. This generalization provides a way to implement visit counters in large, continuous state and action spaces by using density models. Our generalization is different, as it aims first at generalizing the notion of visit counts themselves, from actual counters to "propagating counters". In addition, our approach does not depend on any estimated model - which might be an advantage in domains for which good density models are not available. Nevertheless, we believe that an interesting future work will be comparing the approach suggested by BID1 with ours, in particular for the case of γ_E = 0. The proof for the determinization mentioned in the paper is based on the following lemmata. Lemma A.1. The absolute sums of the positive and of the negative differences between the empirical distribution (deterministic frequency) and the goal distribution (non-deterministic frequency) are equal: Σ_{a: C_T(a)/T > f(a)} (C_T(a)/T − f(a)) = Σ_{a: C_T(a)/T < f(a)} (f(a) − C_T(a)/T). (Figure 7: Normalized MSE between Q and Q* on optimal policy per episode; convergence of E-value LLL and Delayed Q-Learning on the normalized bridge environment (k = 15). MSE was normalized for each agent to enable comparison.) Because Delayed Q-Learning initializes its values optimistically, which results in a high MSE, we normalized the MSE of the two agents (separately) to enable comparison. Notably, to achieve this performance with Delayed Q-Learning, we had to manually choose a low value for m (in Figure 7, m = 10), the hyperparameter regulating the number of visits required before any update. This is an order of magnitude smaller than the theoretical value required for even moderate PAC requirements in the usual (ε, δ) notion; such a theoretically justified m would also imply learning that is orders of magnitude slower. In fact, in the limit of m → 1 the algorithm is effectively quite similar to "vanilla" Q-Learning with an optimistic initialization, which is possible due to the assumption made by the algorithm that all rewards are between 0 and 1. Indeed, several exploration schemes relying on optimism in the face of uncertainty have been proposed BID22. However, because our approach separates reward values and exploratory values, we are able to use optimism for the latter without assuming any prior knowledge about the former - while still achieving results competitive with an optimistic initialization based on prior knowledge. To gain insight into the relation between E-values and the number of visits, we used the linear-approximation architecture on the MountainCar problem. Note that when using E-values, they are generally correlated with visit counts, both because visits result in updates of the E-values through learning and because E-values affect visits through the exploration bonus (or action-selection rule). To dissociate the two, Q-values and E-values were learned in parallel in these simulations, but action-selection was independent of the E-values. Rather, actions were chosen by an ε-greedy agent.
To estimate visit-counts, we recorded the entire set of visited states, and computed the empirical visits histogram by binning the two-dimensional state-space. For each state, its visit counter estimator C̃(s) is the value of the matching bin in the histogram for this state. In addition, we recorded the learned model (the weight vector for E-values) and computed the E-values map by sampling a state for each bin, and calculating its E-values using the model. For simplicity, we consider here the resolution of states alone, summing over all 3 actions for each state. That is, we compare C̃(s) to Σ_a log_{1−α} E(s, a) = C_E(s). FIG5 depicts the empirical visits histogram (left) and the estimated E-values for the case of γ_E = 0 after the complete training. The results of the analysis show that, roughly speaking, those regions in the state space that were more often visited were also associated with a higher C_E(s). To better understand these results, we considered smaller time-windows in the learning process. Specifically, FIG6 depicts the empirical visit histogram (left) and the corresponding C_E(s) (right) in the first 10 episodes, in which visits were more centrally distributed. FIG0 depicts the change in the empirical visit histogram (left) and the change in the corresponding C_E(s) (right) in the last 10 episodes of the training, in which visits were distributed along a spiral (forming a near-optimal behavior). These results demonstrate high similarity between visit-counts and the E-value representation of them, indicating that E-values are good proxies of visit counters. The results depicted in Figures 9 and 10 were achieved with γ_E = 0. For γ_E > 0, we expect the generalized counters (represented by E-values) to account not for standard visits but for "generalized visits", weighting the trajectories starting in each state. We repeated the analysis of FIG0 for the case of γ_E = 0.99. The results, depicted in FIG0, show that indeed for terminal or near-terminal states (where position > 0.5), generalized visits - measured by the change in their generalized counters - are higher, relative to far-from-terminal states, than the empirical visits of these states. To quantify the relation between visits and E-values, we densely sampled the (achievable) state-space to generate many examples of states. For each sampled state, we computed the correlation coefficient between C_E(s) and C̃(s) throughout the learning process (snapshots taken every 10 episodes). The values C̃(s) were estimated by the empirical visits histogram (the value of the bin corresponding to the sampled state) calculated from the visit history up to each snapshot. FIG0 depicts the histogram of correlation coefficients between the two measures, demonstrating strong positive correlations between empirical visit-counters and generalized counters represented by E-values. These results indicate that E-values are an effective way for counting effective visits in continuous MDPs. Note that the number of model parameters used to estimate E(s, a) in this case is much smaller than the size of the table we would have to use in order to track state-action counters at such binning resolution. To test the performance of E-value based agents, simulations were performed using the MountainCar environment. The version of the problem considered here has sparse and delayed reward, meaning that there is a constant reward of 0 unless reaching a goal state, which provides a reward of magnitude 1. Episode length was limited to 1000 steps.
We used linear approximation with tile-coding features BID19, learning the weight vectors for Q and E in parallel. To guarantee that E-values are uniformly initialized and kept between 0 and 1 throughout learning, we initialized the weight vector for E-values to 0 and added a logistic non-linearity to the result of the standard linear approximation. In contrast, the Q-values weight vector was initialized at random, and there was no non-linearity. We compared the performance of several agents. The first two used only Q-values, with a softmax or an ε-greedy action-selection rule. The other two agents are the DORA variants using both Q and E values, following the LLL determinization for softmax, either with γ_E = 0 or with γ_E = 0.99. Parameters for each agent (temperature and ε) were fitted separately to maximize performance. The results depicted in FIG0 demonstrate that using E-values with γ_E > 0 leads to better performance in the MountainCar problem. In addition, we tested our approach using (relatively simple) neural networks. We trained two neural networks in parallel (unlike the two-stream single network used for the Atari simulations), for predicting Q and E values. In this architecture, the same technique of 0-initialization and a logistic non-linearity was applied to the last linear layer of the E-network. Similarly to the linear approximation approach, E-value based agents outperform their ε-greedy and softmax counterparts (not shown). (Figure 13: Probability of reaching the goal on MountainCar (computed by averaging over 50 simulations of each agent), as a function of training episodes. While Softmax exploration fails to solve the problem within 1000 episodes, LLL E-values agents with generalized counters (γ_E > 0) quickly reach high success rates.)
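A sketch of the corresponding linear-approximation update for E with the logistic squashing; the semi-gradient SARSA form below is our reading of the setup, with φ denoting the tile-coding feature vector:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def e_linear_update(w_e, phi_sa, phi_next, alpha=0.1, gamma_e=0.99):
    """Semi-gradient SARSA step for E = sigmoid(w_e . phi) with zero reward;
    w_e starts at zeros, so E starts at 0.5 and remains in (0, 1)."""
    e, e_next = sigmoid(w_e @ phi_sa), sigmoid(w_e @ phi_next)
    td_error = gamma_e * e_next - e                  # the reward is identically 0
    w_e += alpha * td_error * e * (1 - e) * phi_sa   # chain rule through the sigmoid
```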
We propose a generalization of visit-counters that evaluates the propagating exploratory value over trajectories, enabling efficient exploration for model-free RL.
933
scitldr
Deep neuroevolution and deep reinforcement learning (deep RL) algorithms are two popular approaches to policy search. The former is widely applicable and rather stable, but suffers from low sample efficiency. By contrast, the latter is more sample efficient, but the most sample efficient variants are also rather unstable and highly sensitive to hyper-parameter setting. So far, these families of methods have mostly been compared as competing tools. However, an emerging approach consists in combining them so as to get the best of both worlds. Two previously existing combinations use either an ad hoc evolutionary algorithm or a goal exploration process together with the Deep Deterministic Policy Gradient (DDPG) algorithm, a sample efficient off-policy deep RL algorithm. In this paper, we propose a different combination scheme using the simple cross-entropy method (CEM) and Twin Delayed Deep Deterministic policy gradient (TD3), another off-policy deep RL algorithm which improves over DDPG. We evaluate the resulting method, CEM-RL, on a set of benchmarks classically used in deep RL. We show that CEM-RL benefits from several advantages over its competitors and offers a satisfactory trade-off between performance and sample efficiency. Policy search is the problem of finding a policy or controller maximizing some unknown utility function. Recently, research on policy search methods has witnessed a surge of interest due to the combination with deep neural networks, making it possible to find good enough continuous action policies in large domains. From one side, this combination gave rise to the emergence of efficient deep reinforcement learning (deep RL) techniques BID17 BID25. From the other side, evolutionary methods, and particularly deep neuroevolution methods applying evolution strategies (ESs) to the parameters of a deep network, have emerged as a competitive alternative to deep RL due to their higher parallelization capability BID23. Both families of techniques have clear distinguishing properties. Evolutionary methods are significantly less sample efficient than deep RL methods because they learn from complete episodes, whereas deep RL methods use elementary steps of the system as samples, and thus exploit more information BID27. In particular, off-policy deep RL algorithms can use a replay buffer to exploit the same samples as many times as useful, greatly improving sample efficiency. Actually, the sample efficiency of ESs can be improved using the "importance mixing" mechanism, but a recent study has shown that the capacity of importance mixing to improve sample efficiency by a factor of ten is still not enough to compete with off-policy deep RL BID22. From the other side, sample efficient off-policy deep RL methods such as the DDPG algorithm BID17 are known to be unstable and highly sensitive to hyper-parameter setting. Rather than opposing both families as competing solutions to the policy search problem, a richer perspective consists in combining them so as to get the best of both worlds. As covered in Section 2, there are very few attempts in this direction so far. After presenting some background in Section 3, we propose in Section 4 a new combination method that combines the cross-entropy method (CEM) with DDPG or TD3, an off-policy deep RL algorithm which improves over DDPG. In Section 5, we investigate experimentally the properties of this CEM-RL method, showing its advantages both over the components taken separately and over a competing approach.
Beyond the results obtained with CEM-RL, the broader message of this work is that there is still a lot of unexplored potential in new combinations of evolutionary and deep RL methods. Policy search is an extremely active research domain. The realization that evolutionary methods are an alternative to continuous action reinforcement learning and that both families share some similarity is not new BID29, but so far most works have focused on comparing them BID24. Under this perspective, it was shown in BID6 that, despite its simplicity with respect to most deep RL methods, the Cross-Entropy Method (CEM) was a strong baseline in policy search problems. Here, we focus on works which combine both families of methods. Synergies between evolution and reinforcement learning have already been investigated in the context of the so-called Baldwin effect BID28. This literature is somewhat related to research on meta-learning, where one seeks to evolve an initial policy from which a self-learned reinforcement learning algorithm will perform efficient improvement (BID12; BID9). The key difference with respect to the methods investigated here is that in this literature, the outcome of the RL process is not incorporated back into the genome of the agent, whereas here evolution and reinforcement learning update the same parameters in iterative sequences. Closer to ours, the work of BID3 sequentially applies a goal exploration process (GEP) to fill a replay buffer with purely exploratory trajectories and then applies DDPG to the resulting data. The GEP shares many similarities with evolutionary methods, though it focuses on diversity rather than on performance of the learned policies. The authors demonstrate on the Continuous Mountain Car and HALF-CHEETAH-V2 benchmarks that their combination, GEP-PG, is more sample-efficient than DDPG, leads to better final solutions and induces less variance during learning. However, due to the sequential nature of the combination, the GEP part does not benefit from the efficient gradient steps of the deep RL part. Another approach related to ours is the work of BID18, where the authors introduce optimization problems with a surrogate gradient, i.e. a direction which is correlated with the real gradient. They show that by modifying the covariance matrix of an ES to incorporate the information contained in the surrogate, a hybrid algorithm can be constructed. They provide a thorough theoretical investigation of their procedure, which they experimentally show capable of outperforming both a standard gradient descent method and a pure ES on several simple benchmarks. They argue that this method could be useful in RL, since surrogate gradients appear in Q-learning and actor-critic methods. However, a practical demonstration of those claims remains to be performed. Their approach resembles ours in that they use a gradient method to enhance an ES. But a notable difference is that they use the gradient information to directly change the distribution from which samples are drawn, whereas we use gradient information on the samples themselves, impacting the distribution only indirectly. The work which is the closest to ours is BID14. The authors introduce an algorithm called ERL (for Evolutionary Reinforcement Learning), which is presented as an efficient combination of a deep RL algorithm, DDPG, and a population-based evolutionary algorithm. It takes the form of a population of actors, which are constantly mutated and selected in tournaments based on their fitness.
In parallel, a single DDPG agent is trained from the samples generated by the evolutionary population. This single agent is then periodically inserted into the population. When the gradient-based policy improvement mechanism of DDPG is efficient, this individual outperforms its evolutionary siblings; it gets selected into the next generation and draws the whole population towards higher performance. Through their experiments, Khadka & Tumer demonstrate that this setup benefits from an efficient transfer of information between the RL algorithm and the evolutionary algorithm, and vice versa. However, their combination scheme does not profit from the search efficiency of ESs. This is unfortunate because ESs are generally efficient evolutionary methods, and importance mixing can only be applied in their context to bring further sample efficiency improvement. By contrast with the works outlined above, the method presented here combines CEM and TD3 in such a way that our algorithm benefits from the gradient-based policy improvement mechanism of TD3, from the better stability of ESs, and may even benefit from the better sample efficiency brought by importance mixing, as described in Appendix B. In this section, we provide a quick overview of the evolutionary and deep RL methods used throughout the paper. Evolutionary algorithms manage a limited population of individuals, and generate new individuals randomly in the vicinity of the previous elite individuals. There are many variants of such algorithms, some using tournament selection as in BID14, niche-based selection, or more simply taking a fraction of elite individuals; see BID1 for a broader view. Evolution strategies can be seen as specific evolutionary algorithms where only one individual is retained from one generation to the next, this individual being the mean of the distribution from which new individuals are drawn. More specifically, an optimum individual is computed from the previous samples and the next samples are obtained by adding Gaussian noise to the current optimum. Finally, among ESs, Estimation of Distribution Algorithms (EDAs) are a specific family where the population is represented as a distribution using a covariance matrix Σ BID16. This covariance matrix defines a multivariate Gaussian function, and samples at the next iteration are drawn according to Σ. Along iterations, the ellipsoid defined by Σ is progressively adjusted to the top part of the hill corresponding to the local optimum θ*. Various instances of EDAs, such as the Cross-Entropy Method (CEM), Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and PI²-CMA, are covered in BID29. Here we focus on the first two. The Cross-Entropy Method (CEM) is a simple EDA where the number of elite individuals is fixed to a certain value K_e (usually set to half the population). After all individuals of a population are evaluated, the K_e fittest individuals are used to compute the new mean and variance of the population, from which the next generation is sampled after adding some extra variance to prevent premature convergence. In more detail, each individual x_i is sampled by adding Gaussian noise around the mean of the distribution µ, according to the current covariance matrix Σ, i.e. x_i ∼ N(µ, Σ).
The problem-dependent fitness of these new individuals (f_i)_{i=1,...,λ} is computed, and the top-performing K_e individuals, (z_i)_{i=1,...,K_e}, are used to update the parameters of the distribution as follows:

µ_new = Σ_{i=1}^{K_e} λ_i z_i,    (1)

Σ_new = Σ_{i=1}^{K_e} λ_i (z_i − µ_old)(z_i − µ_old)^T + εI,    (2)

where (λ_i)_{i=1,...,K_e} are weights given to the individuals, commonly chosen as either λ_i = 1/K_e or λ_i = log(1 + K_e)/i / Σ_{j=1}^{K_e} log(1 + K_e)/j (BID10; BID0). In the former, each individual is given the same importance, whereas the latter gives more importance to better individuals. A minor difference between CEM and CMA-ES can be found in the update of the covariance matrix. In its standard formulation, CEM uses the new estimate of the mean µ to compute the new Σ, whereas CMA-ES uses the current µ (the one that was used to sample the current generation), as is the case in (2). We used the latter, as BID10 shows it to be more efficient. The algorithm we are using can thus be described either as CEM using the current µ for the estimation of the new Σ, or as CMA-ES without evolutionary paths. The difference being minor, we still call the resulting algorithm CEM. Besides, we add some noise in the form of εI to the usual covariance update to prevent premature convergence. We choose an exponentially decaying ε, by setting an initial and a final standard deviation, respectively σ_init and σ_end, initializing ε to σ_init, and updating ε at each iteration with ε = τ_cem ε + (1 − τ_cem) σ_end. Note that, in practice, Σ can be too large for computing the updates and sampling new individuals. Indeed, if n denotes the number of actor parameters, simply sampling from Σ scales at least in O(n^2.3), which quickly becomes intractable. Instead, we constrain Σ to be diagonal. This means that in our computations, we replace the update in (2) by

Σ_new = Σ_{i=1}^{K_e} λ_i (z_i − µ_old)² + εI,    (3)

where the square of a vector denotes the vector of the squares of its coordinates.
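To make the update concrete, here is a minimal NumPy sketch of the diagonal CEM update in equations (1) and (3). The function names and signatures are ours, not from the released code; it assumes the log-decreasing weighting scheme described above.

```python
import numpy as np

def cem_update(mu, sigma2, samples, fitnesses, k_elite, eps):
    """One diagonal CEM update, following equations (1) and (3).

    mu: current mean of the search distribution, shape (n,)
    sigma2: current diagonal covariance stored as a vector, shape (n,)
    samples: candidate parameter vectors, shape (pop_size, n)
    fitnesses: fitness of each candidate, shape (pop_size,)
    k_elite: number of elite individuals K_e
    eps: current additive noise, prevents premature convergence
    """
    # Select the K_e top-performing individuals, best first.
    elite_idx = np.argsort(fitnesses)[-k_elite:]
    order = np.argsort(fitnesses[elite_idx])[::-1]
    elites = samples[elite_idx][order]

    # Log-decreasing weights: more importance to better individuals.
    ranks = np.arange(1, k_elite + 1)
    w = np.log(1 + k_elite) / ranks
    w /= w.sum()

    new_mu = w @ elites                              # equation (1)
    # CMA-ES-style covariance, centered on the *old* mean mu.
    new_sigma2 = w @ (elites - mu) ** 2 + eps        # equation (3)
    return new_mu, new_sigma2

def decay_eps(eps, tau_cem, sigma_end):
    # Exponential decay: eps <- tau_cem * eps + (1 - tau_cem) * sigma_end
    return tau_cem * eps + (1 - tau_cem) * sigma_end

# Sampling the next generation from the diagonal Gaussian:
# pop = mu + np.sqrt(sigma2) * np.random.randn(pop_size, mu.size)
```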
The steps performed in the environment to evaluate all actors in the population are fed into the replay buffer. The critic is trained from that buffer pro rata to the quantity of new information introduced in the buffer at the current generation. For instance, if the population contains 10 individuals, and if each episode lasts 1000 time steps, then 10,000 new samples are introduced in the replay buffer at each generation. The critic is thus trained for 10,000 mini-batches, which are divided into 2000 mini-batches per learning actor. This is a common practice in deep RL algorithms, where one mini-batch update is performed for each step of the actor in the environment. We also choose this number of steps to be the number of gradient steps taken by half of the population at the next iteration. A pseudo-code of CEM-RL is provided in Algorithm 1. In cases where applying the gradient increases the performance of the actor, CEM benefits from this increase by incorporating the corresponding actors in its computations. By contrast, in cases where the gradient steps decrease performance, the resulting actors are ignored by CEM, which instead focuses on standard samples around π_µ. Those poor samples do not bring new insight on the current distribution of the CEM algorithm, since the gradient steps take them away from the current distribution. However, since all evaluated actors fill the replay buffer, the resulting experience is still fed to the critic and the future learning actors, providing some supplementary exploration. This approach generates a beneficial flow of information between the deep RL part and the evolutionary part. Indeed, on one hand, good actors found by following the current critic directly improve the evolutionary population. On the other hand, good actors found through evolution fill the replay buffer from which the RL algorithm learns. In that respect, our approach benefits from the same properties as the ERL algorithm BID14 depicted in FIG0. But, by contrast with BID14, gradient steps are directly applied to several samples, and using the CEM algorithm makes it possible to use importance mixing, as described in Appendix B. Another difference is that in CEM-RL gradient steps are applied at each iteration, whereas in ERL a deep RL actor is only injected into the population from time to time. One can also see from FIG0 that, in contrast to ERL, CEM-RL does not use any deep RL actor. Other distinguishing properties between ERL and CEM-RL are discussed in the light of empirical results in Section 5.2. Finally, given that CMA-ES is generally considered more sophisticated than CEM, one may wonder why we did not use CMA-ES instead of CEM in the CEM-RL algorithm. Actually, the key contribution of CMA-ES with respect to CEM consists of the evolutionary path mechanism (see Section 3.2), but this mechanism results in some inertia in Σ updates, which resists the beneficial effect of applying RL gradient steps. In this section, we study the CEM-RL algorithm to answer the following questions: • How does the performance of CEM-RL compare to that of CEM and TD3 taken separately? What if we remove the CEM mechanism, resulting in a multi-actor TD3?
Algorithm 1: CEM-RL
Require: max_steps, the maximum number of steps in the environment; τ_cem, σ_init, σ_end and pop_size, hyper-parameters of the CEM algorithm; γ, τ, lr_actor and lr_critic, hyper-parameters of DDPG
1: Initialize a random actor π_µ, to be the mean of the CEM algorithm
2: Let Σ = σ_init I be the covariance matrix of the CEM algorithm
3: Initialize the critic Q^π and the target critic Q^π_t
4: Initialize an empty cyclic replay buffer R
5: total_steps, actor_steps ← 0, 0
6: while total_steps < max_steps:
7:   Draw the current population pop from N(π_µ, Σ) with importance mixing (see Algorithm 2 in Appendix B)
8:   for i ← 1 to pop_size/2:
9:     Set the current policy π to pop[i]
10:    Initialize a target actor π_t with the weights of π
11:    Train Q^π for 2 * actor_steps / pop_size mini-batches
12:    Train π for actor_steps mini-batches
13:    Reintroduce the weights of π in pop
14:  actor_steps ← 0
15:  for i ← 1 to pop_size:
16:    Set the current policy π to pop[i]
17:    (fitness f, steps s) ← evaluate(π)
18:    Fill R with the collected experiences
19:    actor_steps ← actor_steps + s
20:  total_steps ← total_steps + actor_steps
21:  Update π_µ and Σ with the top half of the population (see (1) and (2) in Section 3.2)
22: end while
• How does CEM-RL perform compared to ERL? What are the main factors explaining the difference between both algorithms? Additionally, in Appendices B to E, we investigate other aspects of the performance of CEM-RL, such as the impact of importance mixing, the addition of action noise, or the use of the tanh non-linearity. In order to investigate the above questions, we evaluate the corresponding algorithms on several continuous control tasks simulated with the MUJOCO physics engine and commonly used as policy search benchmarks: HALF-CHEETAH-V2, HOPPER-V2, WALKER2D-V2, SWIMMER-V2 and ANT-V2 BID2. We implemented CEM-RL with the PYTORCH library 1. We built our code around the DDPG and TD3 implementations given by the authors of the TD3 algorithm 2. For the ERL implementation, we used the one given by the authors 3. Unless specified otherwise, each curve represents the average over 10 runs of the corresponding quantity, and the variance corresponds to the 68% confidence interval for the estimation of the mean. In all learning performance figures, dotted curves represent medians and the x-axis represents the total number of steps actually performed in the environment, to highlight potential sample efficiency effects, particularly when using importance mixing (see Appendix B). Architectures of the networks are described in Appendix A. Most TD3 and DDPG hyper-parameters were reused from BID8. The only notable difference is the use of tanh non-linearities instead of RELU in the actor network, after we spotted that the former performs better than the latter on several environments. We trained the networks with the Adam optimizer BID15, with a learning rate of 1e-3 for both the actor and the critic. The discount rate γ was set to 0.99, and the target weight τ to 5e-3. All populations contained 10 actors, and the standard deviations σ_init, σ_end and the constant τ_cem of the CEM algorithm were respectively set to 1e-3, 1e-5 and 0.95. Finally, the size of the replay buffer was set to 1e6, and the batch size to 100. We first compare CEM-TD3 to CEM, TD3 and a multi-actor variant of TD3, then CEM-RL to ERL on several benchmarks. A third section is devoted to additional results which have been relegated to appendices to comply with space constraints.
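To connect Algorithm 1 with code, the following is a compact Python sketch of one CEM-RL generation. It reuses the cem_update sketch above; train_critic, train_actor and evaluate stand in for the TD3 machinery and environment rollouts, and are our own placeholder names.

```python
import numpy as np

def cem_rl_generation(mu, sigma2, pop_size, actor_steps, eps,
                      train_critic, train_actor, evaluate):
    """One generation of CEM-RL (Algorithm 1), with placeholder callables.

    train_critic(n): performs n critic mini-batch updates from the replay buffer.
    train_actor(theta, n): returns theta after n critic-guided gradient steps.
    evaluate(theta): runs one episode, fills the replay buffer,
        and returns (fitness, steps).
    """
    n = mu.shape[0]
    # Line 7: sample the population around the current mean.
    pop = mu + np.sqrt(sigma2) * np.random.randn(pop_size, n)

    # Lines 8-13: half the population takes RL gradient steps.
    for i in range(pop_size // 2):
        train_critic(2 * actor_steps // pop_size)
        pop[i] = train_actor(pop[i], actor_steps)

    # Lines 14-20: evaluate everyone; episodes also fill the replay buffer.
    fitnesses = np.empty(pop_size)
    actor_steps = 0
    for i in range(pop_size):
        fitnesses[i], s = evaluate(pop[i])
        actor_steps += s

    # Line 21: CEM update from the top half of the global population.
    mu, sigma2 = cem_update(mu, sigma2, pop, fitnesses,
                            k_elite=pop_size // 2, eps=eps)
    return mu, sigma2, actor_steps
```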
In this section, we compare CEM-TD3 to three baselines: our variant of CEM, TD3, and a multi-actor variant of TD3. For TD3 and its multi-actor variant, we report the average score of the agent over 10 episodes for every 5000 steps performed in the environment. For CEM and CEM-TD3, we report after each generation the average score of the new mean individual of the population over 10 episodes. From FIG1, one can see that CEM-TD3 outperforms CEM and TD3 on HALF-CHEETAH-V2, HOPPER-V2 and WALKER2D-V2. On most benchmarks, CEM-TD3 also displays slightly less variance than TD3. Further results in Appendix G show that on ANT-V2, CEM-TD3 outperforms CEM and is on par with TD3. More surprisingly, CEM outperforms all other algorithms on SWIMMER-V2, as covered in Appendix E. One may wonder whether the good performance of CEM-TD3 mostly comes from its "ensemble method" nature BID19. Indeed, having a population of actors improves exploration and stabilizes performance by filtering out instabilities that can appear during learning. To answer this question, we performed an ablative study where we removed the CEM mechanism. We considered a population of 5 actors initialized as in CEM-TD3, but then just following the gradient given by the TD3 critic. This algorithm can be seen as a multi-actor TD3 where all actors share the same critic. We reused the hyper-parameters described in Section 5.2. From FIG1, one can see that CEM-TD3 outperforms multi-actor TD3 more or less significantly on all benchmarks, which clearly suggests that the evolutionary part contributes to the performance of CEM-TD3. In summary, these results show that CEM-TD3 improves over each of its components taken separately. In this section, we compare CEM-RL to ERL. Since the ERL method uses DDPG rather than TD3, we compare it to both CEM-DDPG and CEM-TD3. This makes it possible to isolate the effect of the combination scheme from the improvement brought by TD3 itself. Results are shown in FIG2. We let ERL learn for the same number of steps as in Khadka & Tumer, namely 2 million on HALF-CHEETAH-V2 and SWIMMER-V2, 4 million on HOPPER-V2, 6 million on ANT-V2 and 10 million on WALKER2D-V2. However, due to limited computational resources, we stop learning with both CEM-RL methods after 1 million steps, hence the constant performance after 1 million steps. Our results slightly differ from those of the ERL paper BID14. We explain this difference by two factors. First, the authors only average their results over 5 different seeds, whereas we used 10 seeds. Second, the released implementation of ERL may be slightly different from the one used to produce the published results 4, raising again the reproducibility issue recently discussed in the reinforcement learning literature BID11. FIG2 shows that after performing 1 million steps, both CEM-RL methods outperform ERL on HALF-CHEETAH-V2, HOPPER-V2 and WALKER2D-V2. We can also see that CEM-TD3 outperforms CEM-DDPG on WALKER2D-V2. On ANT-V2, CEM-DDPG and ERL being on par after 1 million steps, we increased the number of learning steps in CEM-DDPG to 6 million. The corresponding results are shown in FIG0 in Appendix G. Results on SWIMMER-V2 are covered in Appendix E.
Table 2: Final performance of ERL, CEM-DDPG and CEM-TD3 on 5 environments. We report the means and medians over 10 runs of 1 million steps. For each benchmark, we highlight the results of the method with the best mean.
One can see that, beyond outperforming ERL, CEM-TD3 outperforms CEM-DDPG on most benchmarks, in terms of final performance, convergence speed, and learning stability. This is especially true for hard environments such as WALKER2D-V2 and ANT-V2.
The only exception is SWIMMER-V2, as studied in Appendix E. Table 2 gives the final best results of the methods used in this section. The overall conclusion is that CEM-RL generally outperforms ERL. In this section, we outline the main messages arising from further studies that have been relegated to appendices in order to comply with space constraints. In Appendix B, we investigate the influence of the importance mixing mechanism on the evolution of performance, for CEM and CEM-RL. Results show that importance mixing has a limited impact on the sample efficiency of CEM-TD3 on the benchmarks studied here, in contradiction with results from BID22 obtained using various standard evolution strategies. The fact that the covariance matrix Σ moves faster with CEM-RL may explain this result, as it prevents the reuse of samples. In Appendix C, we analyze the effect of adding Gaussian noise to the actions of CEM-TD3. Unlike what BID14 suggested using ERL, we did not find any conclusive evidence that action space noise improves performance with CEM-TD3. This may be due to the fact that, as further studied in Appendix D, the evolutionary algorithm in ERL tends to converge to a unique individual, hence additional noise is welcome, whereas evolution strategies like CEM more easily maintain some exploration. Indeed, we further investigate the different dynamics of parameter space exploration provided by the ERL and CEM-TD3 algorithms in Appendix D. FIG6 and 7 show that the evolutionary population in ERL tends to collapse towards a single individual, which does not happen with the CEM population due to the sampling method. In Appendix E, we highlight the fact that, on the SWIMMER-V2 benchmark, the performance of the algorithms studied in this paper varies a lot from the performance obtained on other benchmarks. The most likely explanation is that, in SWIMMER-V2, any deep RL method provides deceptive gradient information which is detrimental to convergence towards efficient actor parameters. In this particular context, ERL resists detrimental gradients better than CEM-RL, which suggests designing a version of ERL that uses CEM to improve the population instead of its ad hoc evolutionary algorithm. Finally, in Appendix F, we show that using a tanh non-linearity in the architecture of actors often results in significantly stronger performance than using RELU. This strongly suggests performing "neural architecture search" (BID7) in the context of RL. We advocated in this paper for combining evolutionary and deep RL methods rather than opposing them. In particular, we have proposed such a combination, the CEM-RL method, and showed that in most cases it outperforms not only some evolution strategies and some sample efficient off-policy deep RL algorithms, but also another combination, the ERL algorithm. Importantly, despite being mainly an evolutionary method, CEM-RL is competitive with the state of the art even when considering sample efficiency, which is not the case of other deep neuroevolution methods BID24. Beyond these positive performance results, our study raises more fundamental questions. First, why does the simple CEM algorithm perform so well on the SWIMMER-V2 benchmark? Then, our empirical study of importance mixing did not confirm a clear benefit of using it, nor did we find one from adding noise to actions. We suggest explanations for these phenomena, but nailing down the fundamental reasons behind them will require further investigations.
Such deeper studies will also help understand which properties are critical to the performance and sample efficiency of policy search algorithms, and help define even more efficient policy search algorithms in the future. As suggested in Section 5.2.3, another avenue for future work will consist in designing an ERL algorithm based on CEM rather than on an ad hoc evolutionary algorithm. Finally, given the impact of the neural architecture on our results, we believe that a more systematic search of architectures through techniques such as neural architecture search (BID7) may provide important progress in the performance of deep policy search algorithms. This work was supported by the European Commission, within the DREAM project, and has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No. 640891. We would like to thank Thomas Pierrot for fruitful discussions. A ARCHITECTURE OF THE NETWORKS Our network architectures are very similar to the ones described in BID8. In particular, the size of the layers remains the same. The only difference resides in the non-linearities. B IMPORTANCE MIXING Importance mixing is a specific mechanism designed to improve the sample efficiency of evolution strategies. It was initially introduced in Sun et al. and consisted in reusing some samples from the previous generation in the current one, to avoid the cost of re-evaluating the corresponding policies in the environment. The mechanism was recently extended in BID22 to reusing samples from any generation stored in an archive. Empirical results showed that importance mixing can improve sample efficiency by a factor of ten, and that most of these savings just come from using the samples from the previous generation, as performed by the initial mechanism. A pseudo-code of the importance mixing mechanism is given in Algorithm 2.
Algorithm 2: Importance mixing
Require: p(·, θ_new): current probability density function (pdf); p(·, θ_old): old pdf; g_old: old generation; N: the population size
1: g_new ← ∅
2: for i ← 1 to N:
3:   Let z_i be the i-th individual of the old generation g_old
4:   Draw u_1 and u_2 uniformly in [0, 1]
5:   if min(1, p(z_i, θ_new)/p(z_i, θ_old)) > u_1:
6:     Append z_i to the current generation g_new
7:   Draw z'_i according to the current pdf p(·, θ_new)
8:   if max(0, 1 − p(z'_i, θ_old)/p(z'_i, θ_new)) > u_2:
9:     Append z'_i to the current generation g_new
10:  size ← |g_new|
11:  if size ≥ N: go to 12
12: if size > N: remove a randomly chosen sample
13: if size < N: fill the generation by sampling from p(·, θ_new)
14: return g_new
In CEM, importance mixing is implemented as described in BID22. By contrast, some adaptation is required in CEM-RL. Actors which take gradient steps can no longer be regarded as sampled from the current distribution of the CEM algorithm. We thus choose to apply importance mixing only to the half of the population which does not receive gradient steps from the RL critic. In practice, only actors which do not take gradient steps are inserted into the actor archive and can be replaced with samples from previous generations.
Figure 4: Learning curves of CEM-TD3 and CEM with and without importance mixing on the HALF-CHEETAH-V2, HOPPER-V2, WALKER2D-V2, SWIMMER-V2 and ANT-V2 benchmarks.
From Figure 4, one can see that in the CEM case, importance mixing introduces some minor instability, without noticeably increasing sample efficiency. On HALF-CHEETAH-V2, SWIMMER-V2 and WALKER2D-V2, performance even decreases when using importance mixing. For CEM-RL, the effect varies greatly from environment to environment, but the gain in sample reuse is almost null as well, though an increase in performance can be seen on SWIMMER-V2. The latter fact is consistent with the finding that the gradient steps are not useful in this environment (see Appendix E). On HOPPER-V2 and HALF-CHEETAH-V2, results with and without importance mixing seem to be equivalent.
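As a complement to Algorithm 2, the sketch below shows one possible Python rendering of the mechanism under the acceptance rules reconstructed above; the function names and interface are ours.

```python
import numpy as np

def importance_mixing(old_gen, log_p_new, log_p_old, sample_new, n):
    """Minimal sketch of importance mixing (Algorithm 2).

    old_gen: previous generation, shape (n, dim)
    log_p_new / log_p_old: callables returning log-densities under the
        current and previous search distributions
    sample_new: callable returning one fresh sample from the current pdf
    n: target population size N
    """
    new_gen = []
    for z in old_gen:
        # Reuse an old sample with probability min(1, p_new(z) / p_old(z)).
        if np.random.rand() < min(1.0, np.exp(log_p_new(z) - log_p_old(z))):
            new_gen.append(z)
        # Complement with a fresh sample, accepted with probability
        # max(0, 1 - p_old(z') / p_new(z')).
        z_prime = sample_new()
        if np.random.rand() < max(0.0, 1.0 - np.exp(log_p_old(z_prime)
                                                    - log_p_new(z_prime))):
            new_gen.append(z_prime)
        if len(new_gen) >= n:
            break
    # Trim or fill so that exactly n individuals are returned.
    while len(new_gen) > n:
        new_gen.pop(np.random.randint(len(new_gen)))
    while len(new_gen) < n:
        new_gen.append(sample_new())
    return np.array(new_gen)
```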
On WALKER2D-V2, importance mixing decreases final performance. On ANT-V2, importance mixing seems to accelerate learning in the beginning, but final performances are equivalent to those of CEM-RL. Thus importance mixing seems to have a limited impact in CEM-TD3. This seems to contradict the results obtained in BID22. This may be due to several factors. First, the dimensions of the search spaces in the experiments here are much larger than those studied in BID22, which might deteriorate the estimation of the covariance matrices when samples are too correlated. Second, the benchmarks studied here seem harder than the ones used in BID22. In particular, we can see from FIG1 that CEM is far from solving the environments over one million steps. Perhaps a study over a longer time period would make importance mixing relevant again. Besides, by reusing old samples, the importance mixing mechanism somehow hinders exploration (since we evaluate fewer new individuals), which might be detrimental in the case of MUJOCO environments. Finally, and most importantly, the use of RL gradient steps accelerates the displacement of the covariance matrix, resulting in fewer opportunities for sample reuse. C IMPACT OF ACTION SPACE NOISE In BID14, the authors indicate that one reason for the efficiency of their approach is that the replay buffer of DDPG gets filled with two types of noisy experiences. On one hand, the buffer gets filled with noisy interactions of the DDPG actor with the environment. This is usually referred to as action space noise. On the other hand, actors with different parameters also fill the buffer, which is more similar to parameter space noise. In CEM-RL, we only use parameter space noise, but it would also be possible to add action space noise. To explore this direction, each actor taking gradient steps performs a noisy episode in the environment. We report final results after 1 million steps in TAB9. Learning curves are available in FIG5. Unlike what BID14 suggested, we did not find any conclusive evidence that action space noise improves performance. In CEM-TD3, the CEM part seems to explore enough of the action space on its own. It seems that the sampling performed in CEM results in sufficient exploration and performs better than adding simple Gaussian noise to the actions. This highlights a difference between using an evolution strategy like CEM and an evolutionary algorithm as done in ERL. Evolutionary algorithms tend to converge to a unique individual, whereas evolution strategies more easily maintain some exploration. These aspects are further studied in Appendix D. D PARAMETER SPACE EXPLORATION DYNAMICS In this section, we highlight the difference in policy parameter update dynamics between CEM-RL and ERL. FIG6 displays the evolution of the first two parameters of the actor networks during training with CEM-RL and ERL on HALF-CHEETAH-V2. For ERL, we plot the chosen parameters of the DDPG actor with a continuous line, and represent those of the evolutionary actors with dots. For CEM-RL, we represent the chosen parameters of sampled actors with dots, and the gradient steps based on the TD3 critic with continuous lines. The same number of dots is represented for both algorithms. One can see that in ERL the evolutionary population tends to be much less diverse than in CEM-RL. There are many redundancies in the parameters (dots with the same coordinates), and the population seems to converge to a single individual. On the other hand, there is no such behavior in CEM-RL where each generation introduces completely new samples. As a consequence, parameter space exploration looks better in the CEM-RL algorithm.
To further study this loss of intra-population diversity in ERL, we perform 10 ERL runs and report in FIG7 a histogram displaying the distribution of the population-wise similarity with respect to the populations encountered during learning. We measure this similarity as the average percentage of parameters shared between two different individuals of a given population. The results indicate that around 55% of the populations encountered during a run of ERL display a population similarity above 80%. Results are averaged over 10 runs. As usual, the variance corresponds to the 68% confidence interval for the estimation of the mean. One can also see the difference in how both methods use the gradient information of their respective deep RL part. In the case of ERL, the parameters of the population concentrate around those of the DDPG actor. Every 10 generations, its parameters are introduced into the population, and since DDPG is already efficient alone on HALF-CHEETAH-V2, those parameters quickly spread into the population. Indeed, according to BID14, the resulting DDPG actor is the elite of the population 80% of the time, and is introduced into the population 98% of the time. This integration is however passive: the direction of exploration does not vary much after introducing the DDPG agent. CEM-RL integrates this gradient information differently. The short lines emerging from dots, which represent gradient steps performed by half of the actors, act as scouts. Once CEM becomes aware of better solutions that can be found in a given direction, the sampling of the next population is modified so as to favor this promising direction. CEM is thus pro-actively exploring in the good directions it has been fed with. E RESULTS ON SWIMMER-V2 Experiments on the SWIMMER-V2 benchmark give results that differ a lot from those obtained on other benchmarks, hence they are covered separately here. Figure 8a shows that CEM outperforms TD3, CEM-TD3 and multi-actor TD3. Besides, as shown in FIG8, ERL outperforms CEM-DDPG, which itself outperforms CEM-TD3. All these findings seem to show that being better at RL makes you worse at SWIMMER-V2. The most likely explanation is that, in SWIMMER-V2, any deep RL method provides deceptive gradient information which is detrimental to convergence towards efficient actor parameters. This could already be established from the results of BID14, where the evolutionary algorithm alone produced results on par with the ERL algorithm, showing that RL-based actors were just ignored. In this particular context, the actors using the TD3 gradient being deteriorated by the deceptive gradient effect, CEM-RL behaves as a CEM with only half a population, and is thus less efficient than the standard CEM algorithm. By contrast, ERL resists this issue better than CEM-RL. Indeed, if the actor generated by DDPG does not perform better than the evolutionary population, then this actor is just ignored, and the evolutionary part behaves as usual, without any loss in performance. In practice, Khadka & Tumer note that on SWIMMER-V2, the DDPG actor was rejected 76% of the time. Finally, by comparing CEM and ERL from FIG8 and FIG8, one can conclude that on this benchmark, the evolutionary part of ERL behaves on par with CEM alone. This is at odds with the premature convergence effects seen in the evolutionary part of ERL, as studied in more detail in Appendix D. From all these insights, the SWIMMER-V2 environment appears particularly interesting, as we are not aware of any deep RL method capable of solving it quickly and reliably.
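For reference, here is a small sketch of the population-similarity measure described above (average percentage of shared parameters between two individuals); the tolerance parameter is our own addition, since "shared" must be defined up to floating-point equality.

```python
import numpy as np
from itertools import combinations

def population_similarity(population, tol=1e-8):
    """Average percentage of (near-)identical parameters shared between
    two different individuals, averaged over all pairs in the population.

    population: array of flattened actor parameters, shape (pop_size, n)
    tol: tolerance under which two parameters are considered shared
    """
    shared = [
        np.mean(np.abs(population[i] - population[j]) < tol)
        for i, j in combinations(range(len(population)), 2)
    ]
    return 100.0 * float(np.mean(shared))
```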
F IMPACT OF THE NON-LINEARITY In this section, we explore the impact on performance of the type of non-linearity used in the actor of CEM-TD3. TAB9 reports the results of CEM-TD3 using RELU non-linearities between the linear layers, instead of tanh. Figure 9 displays the learning performance of CEM-TD3 and CEM on the benchmarks, using either the RELU or the tanh non-linearity in the actors. Results indicate that on some benchmarks, changing from tanh to RELU can cause a huge drop in performance. This is particularly obvious on the ANT-V2 benchmark, where the average performance drops by 46%. FIG9 (f) shows that, for the CEM algorithm on the SWIMMER-V2 benchmark, using RELU also causes a 60% performance drop. As previously reported in the literature BID11, this study suggests that network architectures can have a large impact on performance. G ADDITIONAL RESULTS ON ANT-V2 FIG0 represents the learning performance of CEM, TD3, multi-actor TD3, CEM-DDPG and CEM-TD3 on the ANT-V2 benchmark. It is discussed in the main text.
We propose a new combination of evolution strategy and deep reinforcement learning which takes the best of both worlds
934
scitldr
Recent advances in deep reinforcement learning have made significant strides in performance on applications such as Go and Atari games. However, developing practical methods to balance exploration and exploitation in complex domains remains largely unsolved. Thompson Sampling and its extension to reinforcement learning provide an elegant approach to exploration that only requires access to posterior samples of the model. At the same time, advances in approximate Bayesian methods have made posterior approximation for flexible neural network models practical. Thus, it is attractive to consider approximate Bayesian neural networks in a Thompson Sampling framework. To understand the impact of using an approximate posterior on Thompson Sampling, we benchmark well-established and recently developed methods for approximate posterior sampling combined with Thompson Sampling over a series of contextual bandit problems. We found that many approaches that have been successful in the supervised learning setting underperformed in the sequential decision-making scenario. In particular, we highlight the challenge of adapting slowly converging uncertainty estimates to the online setting. Recent advances in reinforcement learning have sparked renewed interest in sequential decision making with deep neural networks. Neural networks have proven to be powerful and flexible function approximators, allowing one to learn mappings directly from complex states (e.g., pixels) to estimates of expected return. While such models can be accurate on data they have been trained on, quantifying model uncertainty on new data remains challenging. However, having an understanding of what is not yet known or well understood is critical to some central tasks of machine intelligence, such as effective exploration for decision making. A fundamental aspect of sequential decision making is the exploration-exploitation dilemma: in order to maximize cumulative reward, agents need to trade off what is expected to be best at the moment (i.e., exploitation) with potentially sub-optimal exploratory actions. Solving this trade-off in an efficient manner to maximize cumulative reward is a significant challenge as it requires uncertainty estimates. Furthermore, exploratory actions should be coordinated throughout the entire decision making process, known as deep exploration, rather than performed independently at each state. Thompson Sampling and its extension to reinforcement learning, known as Posterior Sampling, provide an elegant approach that tackles the exploration-exploitation dilemma by maintaining a posterior over models and choosing actions in proportion to the probability that they are optimal. Unfortunately, maintaining such a posterior is intractable for all but the simplest models. As such, significant effort has been dedicated to approximate Bayesian methods for deep neural networks. These range from variational methods BID17 BID6 BID23 to stochastic minibatch Markov Chain Monte Carlo (BID25; BID1; BID26), among others. Because the exact posterior is intractable, evaluating these approaches is hard. Furthermore, these methods are rarely compared on benchmarks that measure the quality of their estimates of uncertainty for downstream tasks. To address this challenge, we develop a benchmark for exploration methods using deep neural networks. We compare a variety of well-established and recent Bayesian approximations under the lens of Thompson Sampling for contextual bandits, a classical task in sequential decision making.
All code and implementations to reproduce the experiments will be available open-source, to provide a reproducible benchmark for future development. 1 Exploration in the context of reinforcement learning is a highly active area of research. Simple strategies such as epsilon-greedy remain extremely competitive. However, a number of promising techniques have recently emerged that encourage exploration through carefully adding random noise to the parameters (BID12; BID13) or bootstrap sampling before making decisions. These methods rely explicitly or implicitly on posterior sampling for exploration. In this paper, we investigate how different posterior approximations affect the performance of Thompson Sampling from an empirical standpoint. For simplicity, we restrict ourselves to one of the most basic sequential decision making scenarios: that of contextual bandits. No single algorithm bested the others in every bandit problem; however, we observed some general trends. We found that dropout, injecting random noise, and bootstrapping did provide a strong boost in performance on some tasks, but were not able to solve challenging synthetic exploration tasks. Other algorithms, like Variational Inference, Black Box α-divergence, and minibatch Markov Chain Monte Carlo approaches, strongly couple their complex representation and uncertainty estimates. This proves problematic when decisions are made based on partial optimization of both, as online scenarios usually require. On the other hand, making decisions according to a Bayesian linear regression on the representation provided by the last layer of a deep network offers a robust and easy-to-tune approach. It would be interesting to try this approach on more complex reinforcement learning domains. In Section 2 we discuss Thompson Sampling, and present the contextual bandit problem. The different algorithmic approaches that approximate the posterior distribution fed to Thompson Sampling are introduced in Section 3, while the linear case is described in Section 4. The main experimental results are presented in Section 5, and discussed in Section 6. Finally, Section 7 concludes. The contextual bandit problem works as follows. At time t = 1,..., n a new context X_t ∈ R^d arrives and is presented to algorithm A. The algorithm, based on its internal model and X_t, selects one of the k available actions, a_t. Some reward r_t = r_t(X_t, a_t) is then generated and returned to the algorithm, which may update its internal model with the new data. At the end of the process, the reward for the algorithm is given by r = Σ_{t=1}^{n} r_t, and cumulative regret is defined as R_A = E[r* − r], where r* is the cumulative reward of the optimal policy (i.e., the policy that always selects the action with highest expected reward given the context). The goal is to minimize R_A. The main research question we address in this paper is how approximated model posteriors affect the performance of decision making via Thompson Sampling (Algorithm 1) in contextual bandits. We study a variety of algorithmic approaches to approximate a posterior distribution, together with different empirical and synthetic data problems that highlight several aspects of decision making. We consider distributions π over the space of parameters that completely define a problem instance θ ∈ Θ. For example, θ could encode the reward distributions of a set of arms in the multi-armed bandit scenario or, more generally, all the parameters of an MDP in reinforcement learning.
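A minimal sketch of the contextual bandit interaction loop just described may help fix notation. The BanditAlgorithm interface below is our own illustration, not the benchmark's actual API; it assumes the full reward vector is available to the simulator (only the chosen entry is revealed to the algorithm), which matches the deterministic real-data problems used later.

```python
import numpy as np

class BanditAlgorithm:
    """Illustrative interface: any method from Section 3 fits this."""
    def action(self, context):                # pick one of k arms
        raise NotImplementedError
    def update(self, context, action, reward):
        raise NotImplementedError

def run_contextual_bandit(contexts, rewards, algo):
    """contexts: (n, d) array; rewards: (n, k) array with r_t(X_t, a)."""
    n = len(contexts)
    realized, optimal = np.empty(n), np.empty(n)
    for t in range(n):
        a = algo.action(contexts[t])
        algo.update(contexts[t], a, rewards[t, a])
        realized[t], optimal[t] = rewards[t, a], rewards[t].max()
    # With deterministic rewards this matches the regret definition above.
    cumulative_regret = optimal.sum() - realized.sum()
    return cumulative_regret
```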
Thompson Sampling is a classic algorithm which requires only that one can sample from the posterior distribution over plausible problem instances (for example, values or rewards). At each round, it draws a sample and takes a greedy action under the optimal policy for the sample. The posterior distribution is then updated after the result of the action is observed. Thompson Sampling has been shown to be extremely effective for bandit problems both in practice BID9 BID16 and in theory BID0. It is especially appealing for deep neural networks, as one rarely has access to the full posterior but can often approximately sample from it. In this section, we describe the different algorithmic design principles that we considered in our simulations of Section 5. These algorithms include linear methods, Neural Linear and Neural Greedy, variational inference, expectation-propagation, dropout, Monte Carlo methods, bootstrapping, direct noise injection, and Gaussian Processes. In FIG10 in the appendix, we visualize the posteriors of the nonlinear algorithms on a synthetic one-dimensional problem. Linear Methods We apply well-known closed-form updates for Bayesian linear regression for exact posterior inference in linear models BID5. We provide the specific formulas below, and note that they admit a computationally-efficient online version. We consider exact linear posteriors as a baseline; i.e., these formulas compute the posterior when the data was generated according to Y = X^T β + ε where ε ∼ N(0, σ²), and Y represents the reward. Importantly, we model the joint distribution of β and σ² for each action. Sequentially estimating the noise level σ² for each action allows the algorithm to adaptively improve its understanding of the volume of the hyperellipsoid of plausible β's; in general, this leads to a more aggressive initial exploration phase (in both β and σ²). The posterior at time t for action i, after observing X, Y, is π_t(β, σ²) = π_t(β | σ²) π_t(σ²), where we assume σ² ∼ IG(a_t, b_t) and β | σ² ∼ N(µ_t, σ² Σ_t), an Inverse Gamma and a Gaussian distribution, respectively. Their parameters are given by

Σ_t = (X^T X + Λ_0)^{-1},    µ_t = Σ_t (Λ_0 µ_0 + X^T Y),    (1)

a_t = a_0 + t/2,    b_t = b_0 + (Y^T Y + µ_0^T Λ_0 µ_0 − µ_t^T Σ_t^{-1} µ_t)/2.    (2)

We set the prior hyperparameters to µ_0 = 0 and Λ_0 = λ Id, while a_0 = b_0 = η > 1. It follows that initially σ² ∼ IG(η, η), whose mean η/(η − 1) is well defined since η > 1. We consider two approximations to the exact posterior above, motivated by function approximators where d is large. While posterior distributions or confidence ellipsoids should capture dependencies across parameters as shown above (say, a dense Σ_t), in practice, computing the correlations across all pairs of parameters is too expensive, and diagonal covariance approximations are common. For linear models it may still be feasible to exactly compute (1) and (2), whereas in the case of Bayesian neural networks, unfortunately, this may no longer be possible. Accordingly, we study two linear approximations where Σ_t is diagonal. Our goal is to understand the impact of such approximations in the simplest case, to properly set our expectations for the loss in performance of equivalent approximations in more complex approaches, like mean-field variational inference or Stochastic Gradient Langevin Dynamics. Assume for simplicity that the noise standard deviation is known. In FIG1, for d = 2, we see the posterior distribution β_t ∼ N(µ_t, Σ_t) of a linear model based on (1), in green, together with two diagonal approximations. Each approximation tries to minimize a different objective.
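As a concrete illustration, the sketch below maintains the exact Normal-Inverse-Gamma posterior of equations (1) and (2) for one arm, with µ_0 = 0, and uses it for Thompson Sampling. The class and its default hyper-parameter values (lam, eta) are our own illustrative choices.

```python
import numpy as np

class BayesLinArm:
    """Exact Normal-Inverse-Gamma posterior for one arm, following
    equations (1)-(2) with the prior mean mu_0 = 0."""
    def __init__(self, d, lam=0.25, eta=6.0):
        self.lam, self.a0, self.b0, self.d = lam, eta, eta, d
        self.X, self.Y = np.empty((0, d)), np.empty(0)

    def update(self, x, y):
        self.X = np.vstack([self.X, x])
        self.Y = np.append(self.Y, y)

    def sample_params(self):
        prec = self.X.T @ self.X + self.lam * np.eye(self.d)
        cov = np.linalg.inv(prec)                       # Sigma_t
        mu = cov @ (self.X.T @ self.Y)                  # mu_t, since mu_0 = 0
        a = self.a0 + len(self.Y) / 2.0
        b = self.b0 + (self.Y @ self.Y - mu @ prec @ mu) / 2.0
        sigma2 = b / np.random.gamma(a)                 # sigma^2 ~ IG(a, b)
        return np.random.multivariate_normal(mu, sigma2 * cov)

def thompson_action(arms, x):
    # Sample one plausible model per arm and act greedily on the samples.
    return int(np.argmax([arm.sample_params() @ x for arm in arms]))
```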
In blue, the PrecisionDiag posterior approximation finds the diagonal Σ̂ ∈ R^{d×d} minimizing KL(N(µ_t, Σ̂) || N(µ_t, Σ_t)), like in mean-field variational inference. In particular, Σ̂ = Diag(Σ_t^{-1})^{-1}. On the other hand, in orange, the Diag posterior approximation finds the diagonal matrix Σ̂ minimizing KL(N(µ_t, Σ_t) || N(µ_t, Σ̂)) instead. In this case, the solution is simply Σ̂ = Diag(Σ_t). We add linear baselines that do not model the uncertainty in the action noise σ². In addition, we also consider simple greedy and epsilon-greedy linear baselines (i.e., not based on Thompson Sampling). Neural Linear The main problem linear algorithms face is their lack of representational power, which they complement with accurate uncertainty estimates. A natural attempt at getting the best of both worlds consists in performing a Bayesian linear regression on top of the representation of the last layer of a neural network, similarly to prior work. The predicted value v_i for each action a_i is given by v_i = β_i^T z_x, where z_x is the output of the last hidden layer of the network for context x. While linear methods directly try to regress values v on x, we can independently train a deep net to learn a representation z, and then use a Bayesian linear regression to regress v on z, obtain uncertainty estimates on the β's, and make decisions accordingly via Thompson Sampling. Note that we do not explicitly consider the weights of the linear output layer of the network to make decisions; further, the network is only used to find good representations z. In addition, we can update the network and the linear regression at different time-scales. It makes sense to keep an exact linear regression (as in (1) and (2)) at all times, adding each new data point as soon as it arrives. However, we only update the network after a number of points have been collected. In our experiments, after updating the network, we perform a forward pass on all the training data to obtain z_x, which is then fed to the Bayesian regression. In practice this may be too expensive, and z could be updated periodically with online updates on the regression. We call this algorithm Neural Linear. Neural Greedy We refer to the algorithm that simply trains a neural network and acts greedily (i.e., takes the action whose predicted score for the current context is highest) as RMS, as we train it using the RMSProp optimizer. This is our non-linear baseline, and we tested several versions of it (based on whether the training step size was decayed, reset to its initial value for each re-training or not, and how long the network was trained for). We also tried the ε-greedy version of the algorithm, where a random action is selected with probability ε, for some decaying schedule of ε. Variational Inference Variational approaches approximate the posterior by finding a distribution within a tractable family that minimizes the KL divergence to the posterior BID20. These approaches formulate and solve an optimization problem, as opposed, for example, to sampling methods like MCMC BID22. Typically (and in our experiments), the posterior is approximated by a mean-field or factorized distribution where strong independence assumptions are made. For instance, each neural network weight can be modeled via a (conditionally independent) Gaussian distribution whose mean and variance are estimated from data. Recent advances have scaled these approaches to estimate the posterior of neural networks with millions of parameters BID6.
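The two diagonal approximations are easy to compute from a full covariance matrix; here is a short NumPy sketch with an illustrative example of how differently they behave under strong correlations.

```python
import numpy as np

def diag_approx(cov):
    # Diag: minimizes KL(N(mu, cov) || N(mu, diag)); keeps marginal variances.
    return np.diag(np.diag(cov))

def precision_diag_approx(cov):
    # PrecisionDiag: minimizes KL(N(mu, diag) || N(mu, cov)), as in
    # mean-field variational inference; keeps the diagonal of the precision.
    return np.diag(1.0 / np.diag(np.linalg.inv(cov)))

cov = np.array([[1.0, 0.9],
                [0.9, 1.0]])
print(diag_approx(cov))            # diag(1.00, 1.00): same marginal variances
print(precision_diag_approx(cov))  # diag(0.19, 0.19): much narrower
```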
A common criticism of variational inference is that it underestimates uncertainty (e.g., BID5), which could lead to under-exploration. Expectation-Propagation The family of expectation-propagation algorithms BID30 is based on the message passing framework. They iteratively approximate the posterior by updating a single approximation factor (or site) at a time, which usually corresponds to the likelihood of one data point. The algorithm sequentially minimizes a set of local KL divergences, one for each site. Most often, and for computational reasons, sites are chosen to lie in the exponential family. In this case, the minimization corresponds to moment matching. See BID14 for further details. We focus on methods that directly optimize the global EP objective via stochastic gradient descent, as, for instance, Power EP BID28. In particular, in this work, we implement the black-box α-divergence minimization algorithm BID19, where local parameter sharing is applied to the Power EP energy function. Note that different values of α ∈ R\{0} correspond to common algorithms: α = 1 to EP, and α → 0 to Variational Bayes. The optimal α value is problem-dependent BID19. Dropout Dropout is a training technique where the output of each neuron is independently zeroed out with probability p at each forward pass. Once the network has been trained, dropout can still be used to obtain a distribution of predictions for a specific input. Following the best action with respect to the random dropout prediction can be interpreted as an implicit form of Thompson Sampling. Dropout can be seen as optimizing a variational objective BID23 BID13 BID21. Monte Carlo Monte Carlo sampling remains one of the simplest and most reliable tools in the Bayesian toolbox. Rather than parameterizing the full posterior, Monte Carlo methods estimate the posterior through drawing samples. This is naturally appealing for highly parameterized deep neural networks, for which the posterior is intractable in general and even simple approximations such as a multivariate Gaussian are too expensive (i.e., they require computing and inverting a covariance matrix over all parameters). Among Monte Carlo methods, Hamiltonian Monte Carlo (HMC) is often regarded as a gold standard algorithm for neural networks, as it takes advantage of gradient information and momentum to more effectively draw samples. However, it remains unfeasible for larger datasets as it involves a Metropolis accept-reject step that requires computing the log likelihood over the whole data set. A variety of methods have been developed to approximate HMC using mini-batch stochastic gradients. These Stochastic Gradient Langevin Dynamics (SGLD) methods add Gaussian noise to the model gradients during stochastic gradient updates in such a manner that each update results in an approximate sample from the posterior. Different strategies have been developed for augmenting the gradients and noise according to a preconditioning matrix. BID25 show that a preconditioner based on the RMSprop algorithm performs well on deep neural networks. Other work suggested using the Fisher information matrix as a preconditioner in SGLD. Unfortunately the approximations of SGLD hold only if the learning rate is asymptotically annealed to zero. BID1 introduced Stochastic Gradient Fisher Scoring to elegantly remove this requirement by preconditioning according to the Fisher information (or a diagonal approximation thereof).
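The dropout-as-Thompson-Sampling idea above is simple to implement: keep dropout active at decision time, so each forward pass yields one random prediction. Below is a minimal PyTorch sketch; the architecture and class name are ours, chosen to match the two-hidden-layer networks used later in the experiments.

```python
import torch
import torch.nn as nn

class DropoutBandit(nn.Module):
    """Implicit Thompson Sampling via dropout: one stochastic forward
    pass per decision, then act greedily on the sampled prediction."""
    def __init__(self, d, k, hidden=100, p=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, k),
        )

    def action(self, x):
        self.train()  # keep dropout stochastic even when predicting
        with torch.no_grad():
            scores = self.net(torch.as_tensor(x, dtype=torch.float32))
        return int(scores.argmax())
```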
BID26 develop methods for approximately sampling from the posterior using a constant learning rate in stochastic gradient descent, and develop a prescription for a stable version of SGFS. We evaluate the diagonal-SGFS and constant-SGD algorithms from BID26 in this work. Specifically, for constant-SGD we use a constant learning rate for stochastic gradient descent, where the learning rate is given by ε = 2 (S/N) (BB^T)^{-1}, where S is the batch size, N the number of data points, and BB^T an online average of the diagonal empirical Fisher information matrix. For Stochastic Gradient Fisher Scoring we use a stochastic gradient update for the model parameters θ at step t of the form

θ_{t+1} = θ_t + ε ḡ(θ_t) + E ν_t,    ν_t ∼ N(0, I),

where ḡ(θ_t) is the mini-batch estimate of the gradient, and where we take the noise covariance EE^T to also be BB^T. Bootstrap A simple empirical approach to approximate the sampling distribution of any estimator is the Bootstrap BID10. The main idea is to simultaneously train q models, where each model i is based on a different dataset D_i. When all the data D is available in advance, D_i is typically created by sampling |D| elements from D at random with replacement. In our case, however, the data grows one example at a time. Accordingly, we set a parameter p ∈ (0, 1], and append the new datapoint to each D_i independently at random with probability p. In order to emulate Thompson Sampling, we sample a model uniformly at random (i.e., with probability p_i = 1/q) and take the action predicted to be best by the sampled model. We mainly tested the cases q = 5, 10 and p = 0.8, 1.0, with neural network models. Note that even when p = 1 and the datasets are identical, the random initialization of each network, together with the randomness from SGD, leads to different predictions. Direct Noise Injection Parameter-Noise is a recently proposed approach for exploration in deep RL that has shown promising results. The training updates for the network are unchanged, but when selecting actions, the network weights θ are perturbed with isotropic Gaussian noise. Crucially, the network uses layer normalization BID3, which ensures that all weights are on the same scale. The magnitude of the Gaussian noise is adjusted so that the overall effect of the perturbations is similar in scale to ε-greedy with a linearly decaying schedule (see for details). Because the perturbations are applied to the model parameters, we might hope that the actions produced by the perturbations are more sensible than those of ε-greedy. Bayesian Non-parametric Gaussian processes are a gold-standard method for modeling distributions over non-linear continuous functions. It can be shown that, in the limit of infinite hidden units and under a Gaussian prior, a Bayesian neural network converges to a Gaussian process. As such, GPs would appear to be a natural baseline. Unfortunately, standard GPs computationally scale cubically in the number of observations, limiting their applicability to relatively small datasets. There are a wide variety of methods to approximate Gaussian processes using, for example, pseudo-observations or variational inference. We implemented both standard and sparse GPs but only report the former due to similar performance. For the standard GP, due to the scaling issue, we stop adding inputs to the GP after 1000 observations. This performed significantly better than randomly sampling inputs. Our implementation is a multi-task Gaussian process BID7 with a linear and Matern 3/2 product kernel over the inputs and an exponentiated quadratic kernel over latent vectors for the different tasks.
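The online bootstrap mechanism is straightforward; here is a sketch under the interface introduced earlier. The models list and their predict/retraining logic are placeholders for the q independently trained networks.

```python
import numpy as np

class BootstrappedBandit:
    """Sketch of the online bootstrap described above: q models, each new
    datapoint appended to each model's dataset with probability p."""
    def __init__(self, models, p=0.8):
        self.models, self.p = models, p        # q independently trained nets
        self.datasets = [[] for _ in models]

    def update(self, context, action, reward):
        for d in self.datasets:
            if np.random.rand() < self.p:      # per-model inclusion
                d.append((context, action, reward))
        # (periodically) retrain self.models[i] on self.datasets[i]

    def action(self, context):
        # Thompson-style decision: pick one model uniformly, act greedily.
        m = self.models[np.random.randint(len(self.models))]
        return int(np.argmax(m.predict(context)))
```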
The hyperparameters of this model and the latent task vectors are optimized over the GP marginal likelihood. This allows the model to learn correlations between the outputs of the model. Specifically, the covariance function K(·, ·) of the GP over inputs is the product of a linear and a Matern 3/2 kernel,

K(x, x̃) = α (x^T x̃) (1 + √3 r_{λ_m}(x, x̃)) exp(−√3 r_{λ_m}(x, x̃)),

and the task kernel between tasks t and l is

k(t, l) = β exp(−r_{λ_l}(v_t, v_l)² / 2),

where v_l indexes the latent vector for task l and r_λ(x, x̃) = |(x/λ) − (x̃/λ)|. The length-scales λ_m and λ_l, and the amplitude parameters α, β, are optimized via the log marginal likelihood. For the sparse version we used a Sparse Variational GP BID18 with the same kernel and with 300 inducing points, trained via minibatch stochastic gradient descent. In this section, we illustrate some of the subtleties that arise when uncertainty estimates drive sequential decision-making, using simple linear examples. There is a fundamental difference between static and dynamic scenarios. In a static scenario, e.g. supervised learning, we are given a model family Θ (like the set of linear models, trees, or neural networks with specific dimensions), a prior distribution π_0 over Θ, and some observed data D that, importantly, is assumed i.i.d. Our goal is to return an approximate posterior distribution

π̂(θ) ≈ π(θ) = P(θ | D) ∝ P(D | θ) π_0(θ).

We define the quality of our approximation by means of some distance d(π, π̂). On the other hand, in dynamic settings, our estimate at time t, say π̂_t, will be used via some mechanism M, in this case Thompson Sampling, to collect the next data-point, which is then appended to D_t. In this case, the data-points in D_t are no longer independent. D_t will now determine two distributions: the posterior given the data that was actually observed, π_{t+1} = P(θ | D_t), and our new estimate π̂_{t+1}. When the goal is to make good sequential decisions in terms of cumulative regret, the distance d(π_t, π̂_t) is in general no longer a definitive proxy for performance. For instance, a poorly-approximated decision boundary could lead an algorithm, based on π̂, to get stuck repeatedly selecting a single sub-optimal action a. After collecting lots of data for that action, π̂_t and π_t could start to agree (to their capacity) on the models that explain what was observed for a, while both would stick to something close to the prior regarding the other actions. At that point, d(π_t, π̂_t) may show relatively little disagreement, but the regret would already be terrible. Let π*_t be the posterior distribution P(θ | D_t) under Thompson Sampling's assumption, that is, data was always collected according to π*_j for j < t. We follow the idea that π̂_t being close to π*_t for all t leads to strong performance. However, this concept is difficult to formalize: once different decisions are made, data for different actions is collected and it is hard to compare posterior distributions. We illustrate the previous points with a simple example, see FIG2. Data is generated according to a bandit with k = 6 arms. For a given context X ∼ N(µ, Σ), the reward obtained by pulling arm i follows a linear model r_{i,X} = X^T β_i + ε with ε ∼ N(0, σ²_i). The posterior over (β_i, σ²_i) can be exactly computed using the standard Bayesian linear regression formulas presented in Section 3. We set the contextual dimension to d = 20, and the prior to be β ∼ N(0, λ I_d), for λ > 0. In FIG2, we show the posterior distribution for two dimensions of β_i for each arm i after n = 500 pulls. In particular, in FIG2, two independent runs of Thompson Sampling with their posterior distribution are displayed in red and green.
While strongly aligned, the estimates for some arms disagree (especially for arms that are best only for a small fraction of the contexts, like arms 2 and 3, where fewer data-points are available). In FIG2, we also consider Thompson Sampling with an approximate posterior with diagonal covariance matrix, Diag, in red, as defined in Section 3. Each algorithm collects its own data based on its current posterior (or approximation). In this case, the posterior disagreement after n = 500 decisions is certainly stronger. However, as shown in FIG2, if we computed the approximate posterior with a diagonal covariance matrix based on the data collected by the actual posterior, the disagreement would be reduced as much as possible within the approximation capacity (i.e., it still cannot capture correlations in this case). FIG2 then shows the effect of the feedback loop. We look next at the impact that this mismatch has on regret. We illustrate with a similar example how inaccurate posteriors sometimes lead to quite different behaviors in terms of regret. In FIG1, we see the posterior distribution β ∼ N(µ, Σ) of a linear model in green, together with the two diagonal linear approximations introduced in Section 3: the Diag (in orange) and the PrecisionDiag (in blue) approximations, respectively. We now assume there are k linear arms, β_i ∈ R^d for i = 1,..., k, and decisions are made according to the posteriors in FIG1. In FIG1 we plot the regret of Thompson Sampling when there are k = 20 arms, for both d = 15 and d = 30. We see that, while the PrecisionDiag approximation even outperforms the actual posterior, the Diag covariance approximation truly suffers poor regret when we increase the dimension d, as it is heavily penalized by simultaneously over-exploring in a large number of dimensions and repeatedly acting according to implausible models. In this section, we present the simulations and outcomes of several synthetic and real-world data bandit problems with each of the algorithms introduced in Section 3. In particular, we first explain how the simulations were set up and run, and the metrics we report. We then split the experiments according to how data was generated, and the underlying models fit by the algorithms from Section 3. We run the contextual bandit experiments as described at the beginning of Section 2, and discuss below some implementation details of both experiments and algorithms. A detailed summary of the key parameters used for each algorithm can be found in Table 2 in the appendix. Neural Network Architectures All algorithms based on neural networks as function approximators share the same architecture. In particular, we fit a simple fully-connected feedforward network with two hidden layers with 100 units each and ReLU activations. The input of the network has dimension d (the same as the contexts), and there are k outputs, one per action. Note that for each training point (X_t, a_t, r_t) only one action was observed (and algorithms usually only take into account the loss corresponding to the prediction for the observed action). Updating Models A key question is how often and for how long models are updated. Ideally, we would like to train after each new observation and for as long as possible. However, this may limit the applicability of our algorithms in online scenarios where decisions must be made immediately. We update linear algorithms after each time-step by means of (1) and (2).
For neural networks, the default behavior was to train for t s = 20 or 100 mini-batches every t f = 20 timesteps. The size of each mini-batch was 512. We experimented with increasing values of t s, and it proved essential for some algorithms like variational inference approaches. See the details in Table 2.

Metrics We report two metrics: cumulative regret and simple regret. We approximate the latter as the mean cumulative regret in the last 500 time-steps, a proxy for the quality of the final policy (see further discussion on pure exploration settings in BID8). Cumulative regret is computed based on the best expected reward, as is standard. For most real datasets (Statlog, Covertype, Jester, Adult, Census, and Song), the rewards were deterministic, in which case the definition of regret also corresponds to the highest realized reward (i.e., possibly leading to a hard task, which helps to understand why in some cases all regrets look linear). We reshuffle the order of the contexts, and rerun the experiment 50 times to obtain the cumulative regret distribution and report its statistics.

Hyper-Parameter Tuning Deep learning methods are known to be very sensitive to the selection of a wide variety of hyperparameters, and many of the algorithms presented are no exception. Moreover, that choice is known to be highly dataset dependent. Unfortunately, in the bandits scenario, we commonly do not have access to each problem a-priori to perform tuning. For the vast majority of algorithms, we report the outcome for three versions of the algorithm, defined as follows. First, we use one version where hyper-parameters take values we guessed to be reasonable a-priori. Then, we add two additional instances whose hyper-parameters were optimized on two different datasets via Bayesian Optimization. For example, in the case of Dropout, the former version is named Dropout, while the optimized versions are named Dropout-MR (using the Mushroom dataset) and Dropout-SL (using the Statlog dataset), respectively. Some algorithms truly benefit from hyper-parameter optimization, while others do not show remarkable differences in performance; the latter are more appropriate in settings where access to the real environment for tuning is not possible in advance.

Figure 3: Wheel bandits for increasing values of δ. The optimal action for the blue, red, green, black, and yellow regions is action 1, 2, 3, 4, and 5, respectively.

Buffer After some experimentation, we decided not to use a data buffer, as evidence of catastrophic forgetting was observed and the datasets are relatively small. Accordingly, all observations are sampled with equal probability to be part of a mini-batch. In addition, as is standard in bandit algorithms, each action was initially selected s = 3 times using round-robin, independently of the context.

We evaluated the algorithms on a range of bandit problems created from real-world data. In particular, we test on the Mushroom, Statlog, Covertype, Financial, Jester, Adult, Census, and Song datasets (see Appendix Section A for details on each dataset and bandit problem). They exhibit a broad range of properties: small and large sizes, one dominating action versus more homogeneous optimality, learnable or little signal, stochastic or deterministic rewards, etc. For space reasons, the outcomes of some simulations are presented in the Appendix. The Statlog, Covertype, Adult, and Census datasets were originally tested in BID11.
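A minimal PyTorch sketch of the shared architecture and update schedule just described might look as follows. The class and function names are ours, and the squared loss on the observed action's prediction is an assumption consistent with the reward-prediction setup described above.

import torch
import torch.nn as nn

class BanditNet(nn.Module):
    # Two hidden layers of 100 ReLU units; d context inputs and k outputs,
    # one predicted reward per action.
    def __init__(self, d, k):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d, 100), nn.ReLU(),
            nn.Linear(100, 100), nn.ReLU(),
            nn.Linear(100, k),
        )

    def forward(self, x):
        return self.net(x)

def train_step(model, opt, contexts, actions, rewards, t_s=100, batch=512):
    # Called every t_f = 20 bandit time-steps; runs t_s mini-batches of
    # size 512, sampling all past observations with equal probability
    # (no buffer). Only the output for the action actually played
    # receives a loss, since the other rewards were never observed.
    n = contexts.shape[0]
    for _ in range(t_s):
        idx = torch.randint(0, n, (min(batch, n),))
        pred = model(contexts[idx]).gather(1, actions[idx].unsqueeze(1)).squeeze(1)
        loss = ((pred - rewards[idx]) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()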
We summarize the final cumulative regret for the Mushroom, Statlog, Covertype, Financial, and Jester datasets in TAB0. In Figure 5 in the appendix, we show a box plot of the ranks achieved by each algorithm across the suite of bandit problems (see Appendix Tables 6 and 7 for the full results). As most of the algorithms from Section 3 can be implemented for any model architecture, in this subsection we use linear models as a baseline comparison across algorithms (i.e., neural networks that contain a single linear layer). This allows us to directly compare the approximate methods against methods that can compute the exact posterior. The specific hyper-parameter configurations used in the experiments are described in TAB2 in the appendix. Datasets are the same as in the previous subsection. The cumulative and simple regret are provided in appendix Tables 4 and 5.

Some of the real-data problems presented above do not require significant exploration. We design an artificial problem where the need for exploration is smoothly parameterized. The wheel bandit is defined as follows (see Figure 3). Set d = 2, and δ ∈ (0, 1), the exploration parameter. Contexts are sampled uniformly at random in the unit circle in R 2, X ∼ U (D). There are k = 5 possible actions. The first action a 1 always offers reward r ∼ N (µ 1, σ²), independently of the context. On the other hand, for contexts such that ‖X‖ ≤ δ, i.e. inside the blue circle in Figure 3, the other four actions are equally distributed and sub-optimal, with r ∼ N (µ 2, σ²) for µ 2 < µ 1. When ‖X‖ > δ, we are outside the blue circle, and only one of the actions a 2,..., a 5 is optimal, depending on the signs of the context components X = (X 1, X 2). If X 1, X 2 > 0, action 2 is optimal. If X 1 > 0, X 2 < 0, action 3 is optimal, and so on. Non-optimal actions still deliver r ∼ N (µ 2, σ²) in this region, except a 1, whose mean reward is always µ 1, while the optimal action provides r ∼ N (µ 3, σ²), with µ 3 ≫ µ 1. We set µ 1 = 1.2, µ 2 = 1.0, µ 3 = 50.0, and σ = 0.01. Note that the probability of a context randomly falling in the high-reward region (not blue) is 1 − δ². The difficulty of the problem increases with δ, and we expect algorithms to get stuck repeatedly selecting action a 1 for large δ. The problem can be easily generalized for d > 2. Results are shown in Table 9.

Overall, we found that there is significant room for improvement in uncertainty estimation for neural networks in sequential decision-making problems. First, unlike in supervised learning, sequential decision-making requires the model to be frequently updated as data is accumulated. As a result, methods that converge slowly are at a disadvantage because we must truncate optimization to make the method practical for the online setting. In these cases, we found that partially optimized uncertainty estimates can lead to catastrophic decisions and poor performance. Second, and while it deserves further investigation, it seems that decoupling representation learning and uncertainty estimation improves performance. The NeuralLinear algorithm is an example of this decoupling. With such a model, the uncertainty estimates can be solved for in closed form (but may be erroneous due to the simplistic model), so there is no issue with partial optimization. We suspect that this may be the reason for the improved performance.
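A rough sketch of the NeuralLinear decoupling follows. It is our own simplification, not the paper's exact implementation: the network is assumed to expose its last hidden layer as model.features, the noise variance is held fixed rather than given its own prior, and the per-arm Bayesian linear regression is recomputed from scratch at every decision.

import numpy as np
import torch

def neural_linear_choose(model, x, data, lam=0.25, sigma2=1.0):
    # Representation learning and uncertainty estimation are decoupled:
    # the network supplies features z = phi(x), and Thompson Sampling runs
    # on an exact Bayesian linear regression over those features, per arm.
    with torch.no_grad():
        z = model.features(torch.as_tensor(x, dtype=torch.float32)).numpy()
    scores = []
    for Z, y in data:  # data[i] = (feature matrix, rewards) seen for arm i
        m = Z.shape[1]
        prec = np.eye(m) / lam + Z.T @ Z / sigma2
        cov = np.linalg.inv(prec)
        mu = cov @ (Z.T @ y) / sigma2
        beta = np.random.multivariate_normal(mu, cov)  # exact posterior sample
        scores.append(float(z @ beta))
    return int(np.argmax(scores))

The linear head always has access to the exact solution of its (approximate) problem, so the quality of its uncertainty does not depend on how many optimization steps were budgeted, which is the property the discussion above attributes the improved performance to.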
In addition, we observed that many algorithms are sensitive to their hyperparameters, so that the best configurations are problem-dependent. Finally, we found that in many cases the inherent randomness in Stochastic Gradient Descent provided sufficient exploration. Accordingly, in some scenarios it may be hard to justify the use of complicated (and less transparent) variations of simple methods. However, Stochastic Gradient Descent is by no means always enough: in our synthetic exploration-oriented problem (the Wheel bandit) additional exploration was necessary.

FIG5: The suffix of the BBB legend label indicates the number of training epochs in each training step. We emphasize that in this evaluation, all algorithms use the same family of models (i.e., linear). While PrecisionDiag exactly solves the mean field problem, BBB relies on partial optimization via SGD. As the number of training epochs increases, BBB improves performance, but is always outperformed by PrecisionDiag.

Next, we discuss our main findings for each class of algorithms.

Linear Methods. Linear methods offer a reasonable baseline, surprisingly strong in many cases. While their representation power is certainly a limiting factor, their ability to compute informative uncertainty measures seems to pay off and balance out their initial disadvantage. They do well in several datasets, and are able to react fast to unexpected or extreme rewards (maybe because single points can have a heavy impact on fitted models, and their updates are immediate, deterministic, and exact). Some datasets clearly need more complex non-linear representations, and linear methods are unable to efficiently solve those. In addition, linear methods obviously offer computational advantages, and it would be interesting to investigate how their performance degrades when a finite data buffer feeds the estimates, as various real-world online applications may require (instead of all collected data). In terms of the diagonal linear approximations described in Section 3, we found that diagonalizing the precision matrix (as in mean-field Variational Inference) performs dramatically better than diagonalizing the covariance matrix.

NeuralLinear. The NeuralLinear algorithm sits near a sweet spot that is worth studying further. In general it seems to improve the RMS neural network it is based on, suggesting its exploration mechanisms add concrete value. We believe its main strength is that it is able to simultaneously learn a data representation that greatly simplifies the task at hand, and to accurately quantify the uncertainty over linear models that explain the observed rewards in terms of the proposed representation. While the former process may be noisier and heavily dependent on the number of training steps taken and the available data, the latter always offers the exact solution to its approximate parent problem. This, together with the partial success of linear methods with poor representations, may explain its promising results. In some sense, it knows what it knows. In the Wheel problem, which requires increasingly good exploration mechanisms, NeuralLinear is probably the best algorithm. Its performance is almost an order of magnitude better than any RMS algorithm (and its spinoffs, like Bootstrapped NN, Dropout, or Parameter Noise), and all greedy linear approaches. On the other hand, it is able to successfully solve problems that require non-linear representations (such as Statlog or Covertype) where linear approaches fail.
In addition, the algorithm is remarkably easy to tune, and robust in terms of hyper-parameter configurations. While conceptually simple, its deployment to large scale systems may involve some technical difficulties; mainly, updating the Bayesian estimates when the network is re-trained. We believe, however, that standard solutions to similar problems (like running averages) could greatly mitigate these issues. In our experiments and compared to other algorithms, as shown in Table 8, NeuralLinear is fast from a computational standpoint.

Variational Inference. Overall, Bayes By Backprop performed poorly, ranking in the bottom half of algorithms across datasets (TAB0). To investigate if this was due to underestimating uncertainty (as variational methods are known to do, BID5), to the mean field approximation, or to stochastic optimization, we applied BBB to a linear model, where the mean field optimization problem can be solved in closed form (FIG5). We found that the performance of BBB slowly improved as the number of training epochs increased, but underperformed compared to the exact mean field solution. Moreover, the difference in performance due to the number of training steps dwarfed the difference between the mean field solution and the exact posterior. This suggests that it is not sufficient to partially optimize the variational parameters when the uncertainty estimates directly affect the data being collected. In supervised learning, optimizing to convergence is acceptable; however, in the online setting, optimizing to convergence at every step incurs unreasonable computational cost.

Expectation-Propagation. The performance of Black Box α-divergence algorithms was poor. Because this class of algorithms is similar to BBB (in fact, as α → 0, it converges to the BBB objective), we suspect that partial convergence was also the cause of their poor performance. We found these algorithms to be sensitive to the number of training steps between actions, requiring a large number to achieve marginal performance. Their terrible performance in the Mushroom bandit is remarkable, while in the other datasets they perform slightly worse than their variational inference counterpart. Given the successes of Black Box α-divergence in other domains BID19, investigating approaches to sidestep the slow convergence of the uncertainty estimates is a promising direction for future work.

Monte Carlo. Constant-SGD comes out as the winner on Covertype, which requires non-linearity and exploration, as evidenced by the performance of the linear baseline approaches (TAB0). The method is especially appealing as it does not require tuning learning rates or exploration parameters. SGFS, however, performs better on average. The additional injected noise in SGFS may cause the model to explore more and thus perform better, as shown in the Wheel Bandit problem where SGFS strongly outperforms Constant-SGD.

Bootstrap. The bootstrap offers significant gains with respect to its parent algorithm (RMS) in several datasets. Note that in Statlog one of the actions is optimal around 80% of the time, and the bootstrapped predictions may help to avoid getting stuck, something from which RMS methods may suffer. In other scenarios, the randomness from SGD may be enough for exploration, and the bootstrap may not offer important benefits. In those cases, it might not justify the heavy computational overhead of the method.
We found it surprising that the optimized versions of BootstrappedNN decided to use only q = 2 and q = 3 networks respectively (while we set its value to q = 10 in the manually tuned version, and the extra networks did not improve performance significantly). Unfortunately, Bootstrapped NNs were not able to solve the Wheel problem, and their performance was fairly similar to that of RMS. One possible explanation is that, given the sparsity of the reward, all the bootstrapped networks agreed for the most part, and the algorithm simply got stuck selecting action a 1. As opposed to linear models, reacting to unusual rewards could take Bootstrapped NNs some time, as good predictions could be randomly overlooked (and useful data discarded if p ≪ 1).

Direct Noise Injection. When properly tuned, Parameter-Noise provided an important boost in performance across datasets over the learner that it was based on (RMS): the average rank of ParamNoise-SL is 20.9, compared to 28.7 for RMS (TAB0). However, we found the algorithm hard to tune and sensitive to the heuristic controlling the injected noise level. On the synthetic Wheel problem, where exploration is necessary, both parameter-noise and RMS suffer from under-exploration and perform similarly, except ParamNoise-MR, which does a good job. In addition, developing an intuition for the heuristic is not straightforward, as it lacks transparency and a principled grounding, and thus may require repeated access to the decision-making process for tuning.

Dropout. We initially experimented with two dropout versions: fixed p = 0.5, and p = 0.8. The latter consistently delivered better results, and it is the one we manually picked. The optimized versions of the algorithm provided decent improvements over its base RMS (especially Dropout-MR). In the Wheel problem, dropout performance is somewhat poor: Dropout is outperformed by RMS, while Dropout-MR offers gains with respect to all versions of RMS but is not competitive with the best algorithms. Overall, the algorithm seems to heavily depend on its hyper-parameters (see the cumulative-regret performance of the raw Dropout, for example). Dropout was used both for training and for decision-making; unfortunately, we did not add a baseline where dropout only applies during training. Consequently, it is not obvious how to disentangle the contribution of better training from that of better exploration. This remains future work.

Bayesian Non-parametrics. Perhaps unsurprisingly, Gaussian processes perform reasonably well on problems with little data but struggle on larger problems. While this motivated the use of sparse GPs, the latter were not able to perform similarly to stronger (and decidedly simpler) methods.

In this work, we empirically studied the impact on performance of approximate model posteriors for decision making via Thompson Sampling in contextual bandits. We found that the most robust methods exactly measured uncertainty (possibly under the wrong model assumptions) on top of complex representations learned in parallel. More complicated approaches that learn the representation and its uncertainty together seemed to require heavier training, an important drawback in online scenarios, and exhibited stronger hyper-parameter dependence. Further exploring and developing the promising approaches is an exciting avenue for future work.

Hyper-parameter configurations (Table 2 and TAB2, appendix):
- Greedy NN approach, fixed learning rate (γ = 0.01).
- Learning rate decays, and it is reset every training period.
- Learning rate decays, and it is not reset at all. It starts at γ = 1.
- RMS: Based on RMS3 net.
Learning decay rate is 0.55, initial learning rate is 1.0. Trained for t s = 100, t f = 20.
- Based on RMS3 net. Learning decay rate is 2.5, initial learning rate is 1.0. Trained for t s = 50, t f = 20.
- Based on RMS3 net. Learning decay rate is 0.4, initial learning rate is 1.1. Trained for t s = 100, t f = 20.
- SGFS: Burning = 500, learning rate γ = 0.014, EMA decay = 0.9, noise σ = 0.75.
- Uniform: Takes each action at random with equal probability.
- BayesByBackprop with noise σ = 0.5 (t s = 100; for the first 100 training periods, linear decay from t s = 10000).
- BayesByBackprop with noise σ = 0.75 (t s = 100; for the first 100 training periods, linear decay from t s = 10000).
- BayesByBackprop with noise σ = 1.0 (t s = 100; for the first 100 training periods, linear decay from t s = 10000).
- Bootstrapped NN: Bootstrapped with q = 5 models, and p = 0.85. Based on RMS3 net.
- Bootstrapped NN2: Bootstrapped with q = 5 models, and p = 1.0. Based on RMS3 net.
- Bootstrapped NN3: Bootstrapped with q = 10 models, and p = 1.0. Based on RMS3 net.
- Dropout (RMS3): Dropout with probability p = 0.8. Based on RMS3 net.
- Dropout (RMS2): Dropout with probability p = 0.8. Based on RMS2 net.
- Greedy NN approach, fixed learning rate (γ = 0.01).
- Learning rate decays, and it is reset every training period.
- RMS2b: Similar to RMS2, but training for longer (t s = 800).
- Learning rate decays, and it is not reset at all. Starts at γ = 1.
- SGFS: Burning = 500, learning rate γ = 0.014, EMA decay = 0.9, noise σ = 0.75.
- ConstSGD: Burning = 500, EMA decay = 0.9, noise σ = 0.5.
- Initial noise σ = 0.01, and level = 0.01. Based on RMS3 net. Trained for longer: t s = 800.
- Uniform: Takes each action at random with equal probability.

Table 4: Cumulative regret incurred by linear models using algorithms in Section 3 on the bandits described in Section A. Values reported are the mean over 50 independent trials with standard error of the mean.

Table 5: Simple regret incurred by linear models using algorithms in Section 3 on the bandits described in Section A. Simple regret was approximated by averaging the regret over the final 500 steps. Values reported are the mean over 50 independent trials with standard error of the mean. Columns: Mushroom, Statlog, Covertype, Financial, Jester, Adult.
Alpha Divergences: 0.68 ± 0.04, 0.07 ± 0.00, 0.31 ± 0.00, 0.00 ± 0.00, 2.91 ± 0.04, 0.75 ± 0.00
Alpha Divergences FORMULA1: 1.50 ± 0.05, 0.08 ± 0.00, 0.31 ± 0.00, 0.00 ± 0.00, 2.98 ± 0.03, 0.75 ± 0.00
Alpha Divergences FORMULA2: 1.51 ± 0.05, 0.13 ± 0.00, 0.32 ± 0.00, 0.00 ± 0.00, 3.42 ± 0.05, 0.77 ± 0.00
Alpha Divergences: 1.50 ± 0.05, 0.08 ± 0.00, 0.31 ± 0.00, 0.00 ± 0.00, 2. ...
LinGreedy: 0.07 ± 0.01, 0.07 ± 0.00, 0.29 ± 0.00, 0.01 ± 0.00, 2.80 ± 0.03, 0.67 ± 0.00
LinGreedy (eps=0.05): 0.24 ± 0.02, 0.10 ± 0.00, 0.31 ± 0.00, 0.06 ± 0.00, 2.86 ± 0.03, 0.68 ± 0.00
LinPost: 0.29 ± 0.03, 0.06 ± 0.00, 0.28 ± 0.00, 0.01 ± 0.00, 2.74 ± 0.04, 0.69 ± 0.00
LinfullDiagPost: 4.10 ± 0.07, 0.18 ± 0.00, 0.63 ± 0.00, 0.00 ± 0.00, 2.86 ± 0.03, 0.89 ± 0.00
LinfullDiagPrecPost: 0.19 ± 0.02, 0.05 ± 0.00, 0.28 ± 0.00, 0.00 ± 0.00, 2.82 ± 0.03, 0.67 ± 0.00
LinfullPost: 0.08 ± 0.01, 0.05 ± 0.00, 0.28 ± 0.00, 0.00 ± 0.00, 2.86 ± 0.03, 0.67 ± 0.00
Param-Noise: 0.49 ± 0.07, 0.05 ± 0.00, 0.32 ± 0.00, 0.01 ± 0.00, 2.87 ± 0.04, 0.69 ± 0.00
Param-Noise2: 0.36 ± 0.05, 0.05 ± 0.00, 0.33 ± 0.00, 0.01 ± 0.00, 2.83 ± 0.04, 0.69 ± 0.00
Uniform: 4.88 ± 0.07, 0.86 ± 0.00, 0.86 ± 0.00, 1.25 ± 0.02, 5.03 ± 0.07, 0.93 ± 0.00

Table 6: Cumulative regret incurred by models using algorithms in Section 3 on the bandits described in Section A.
Values reported are the mean over 50 independent trials with standard error of the mean. Normalized with respect to the performance of Uniform.

Table 7: Simple regret incurred by models using algorithms in Section 3 on the bandits described in Section A. Simple regret was approximated by averaging the regret over the final 500 steps. Values reported are the mean over 50 independent trials with standard error of the mean. Normalized with respect to the performance of Uniform.

Table 8: Elapsed time for algorithms in Section 3 on the bandits described in Section A. Values reported are the mean over 50 independent trials with standard error of the mean. Normalized with respect to the elapsed time required by RMS (which uses t s = 100 and t f = 20).

Table 9: Cumulative regret incurred on the Wheel Bandit problem with increasing values of δ. Values reported are the mean over 50 independent trials with standard error of the mean. Normalized with respect to the performance of Uniform.

We qualitatively compare plots of the sample distribution from various methods, similarly to BID19. We plot the mean and standard deviation of 100 samples drawn from each method, conditioned on a small set of observations with three outputs (two are from the same underlying function and thus strongly correlated, while the third (bottom) is independent). The true underlying functions are plotted in red.

Mushroom. The Mushroom Dataset contains 22 attributes per mushroom, and two classes: poisonous and safe. As in BID6, we create a bandit problem where the agent must decide whether or not to eat a given mushroom. Eating a safe mushroom provides reward +5. Eating a poisonous mushroom delivers reward +5 with probability 1/2 and reward -35 otherwise. If the agent does not eat a mushroom, then the reward is 0. We set n = 50000.

Statlog. The Shuttle Statlog Dataset BID2 provides the value of d = 9 indicators during a space shuttle flight, and the goal is to predict the state of the radiator subsystem of the shuttle. There are k = 7 possible states, and if the agent selects the right state, then reward 1 is generated. Otherwise, the agent obtains no reward (r = 0). The most interesting aspect of the dataset is that one action is the optimal one in 80% of the cases, and some algorithms may commit to this action instead of further exploring. In this case, n = 43500.

Covertype. The Covertype Dataset BID2 classifies the cover type of northern Colorado forest areas in k = 7 classes, based on d = 54 features, including elevation, slope, aspect, and soil type. Again, the agent obtains reward 1 if the correct class is selected, and 0 otherwise. We run the bandit for n = 150000.

Financial. We created the Financial Dataset by pulling the stock prices of d = 21 publicly traded companies in NYSE and Nasdaq, for the last 14 years (n = 3713). For each day, the context was the price difference between the beginning and end of the session for each stock. We synthetically created the arms to be linear combinations of the contexts, representing k = 8 different potential portfolios. By far, this was the smallest dataset, and many algorithms over-explored at the beginning with no time to amortize their investment (Thompson Sampling does not account for the horizon).

Jester. We create a recommendation system bandit problem as follows.
The Jester Dataset BID15 provides continuous ratings in [−10, 10] for 100 jokes from 73421 users. We find a complete subset of n = 19181 users rating all 40 jokes. Following Riquelme et al., we take d = 32 of the ratings as the context of the user, and k = 8 as the arms. The agent recommends one joke, and obtains the reward corresponding to the rating of the user for the selected joke.

Adult. The Adult Dataset BID24 BID2 comprises personal information from the US Census Bureau database, and the standard prediction task is to determine if a person makes over $50K a year or not. However, we consider the k = 14 different occupations as feasible actions, based on d = 94 covariates (many of them binarized). As in previous datasets, the agent obtains reward 1 for making the right prediction, and 0 otherwise. We set n = 45222.

Census. The Census Dataset BID2 contains a number of personal features (age, native language, education, ...) which we summarize in d = 389 covariates, including binary dummy variables for categorical features. Our goal again is to predict the occupation of the individual among k = 9 classes. The agent obtains reward 1 for making the right prediction, and 0 otherwise, for each of the n = 250000 randomly selected data points.

Song. The YearPredictionMSD Dataset is a subset of the Million Song Dataset BID4. The goal is to predict the year a given song was released based on d = 90 technical audio features. We divided the years into k = 10 contiguous year buckets containing the same number of songs, and provided decreasing Gaussian rewards as a function of the distance between the interval chosen by the agent and the one containing the year the song was actually released. We initially selected n = 250000 songs at random from the training set. The Statlog, Covertype, Adult, and Census datasets were tested in BID11.
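A generic sketch of the conversion used for the classification datasets above (Statlog, Covertype, Adult, Census): contexts are the feature vectors, there is one arm per class, and the deterministic reward is 1 if and only if the chosen arm matches the true label. The function name is ours.

import numpy as np

def classification_bandit(features, labels, n, seed=0):
    # Yields (context, reward_fn) pairs; reward_fn(a) is 1.0 iff arm a
    # equals the true class label, and 0.0 otherwise.
    order = np.random.default_rng(seed).permutation(len(labels))[:n]
    for t in order:  # contexts are reshuffled on every run
        yield features[t], (lambda a, y=labels[t]: float(a == y))

# Usage sketch:
# for x, reward in classification_bandit(F, L, n=43500):
#     a = agent.act(x)
#     agent.update(x, a, reward(a))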
An Empirical Comparison of Bayesian Deep Networks for Thompson Sampling
935
scitldr
Class labels have been empirically shown useful in improving the sample quality of generative adversarial nets (GANs). In this paper, we mathematically study the properties of the current variants of GANs that make use of class label information. With the class-aware gradient and cross-entropy decomposition, we reveal how class labels and associated losses influence GAN's training. Based on that, we propose Activation Maximization Generative Adversarial Networks (AM-GAN) as an advanced solution. Comprehensive experiments have been conducted to validate our analysis and evaluate the effectiveness of our solution, where AM-GAN outperforms other strong baselines and achieves state-of-the-art Inception Score (8.91) on CIFAR-10. In addition, we demonstrate that, with the Inception ImageNet classifier, Inception Score mainly tracks the diversity of the generator; there is, however, no reliable evidence that it can reflect the true sample quality. We thus propose a new metric, called AM Score, to provide a more accurate estimation of the sample quality. Our proposed model also outperforms the baseline methods under the new metric.

Generative adversarial nets (GANs) BID7, as a new way of learning generative models, have recently shown promising results in various challenging tasks, such as realistic image generation BID17 BID26 BID9, conditional image generation BID12 BID2 BID13, image manipulation, and text generation BID25. Despite the great success, it is still challenging for the current GAN models to produce convincing samples when trained on datasets with high variability, even for image generation with low resolution, e.g., CIFAR-10. Meanwhile, people have empirically found that taking advantage of class labels can significantly improve the sample quality.

There are three typical GAN models that make use of the label information: CatGAN BID20 builds the discriminator as a multi-class classifier; LabelGAN BID19 extends the discriminator with one extra class for the generated samples; AC-GAN BID18 jointly trains the real-fake discriminator and an auxiliary classifier for the specific real classes. By taking the class labels into account, these GAN models show improved generation quality and stability. However, the mechanisms behind them have not been fully explored BID6.

In this paper, we mathematically study GAN models with the consideration of class labels. We derive the gradient of the generator's loss w.r.t. the class logits in the discriminator, named the class-aware gradient, for LabelGAN BID19, and further show that this gradient tends to guide each generated sample towards being one of the specific real classes. Moreover, we show that AC-GAN BID18 can be viewed as a GAN model with a hierarchical class discriminator. Based on the analysis, we reveal some potential issues in the previous methods and accordingly propose a new method to resolve these issues. Specifically, we argue that a model with an explicit target class would provide clearer gradient guidance to the generator than an implicit target class model like that in BID19. Comparing with BID18, we show that introducing the specific real class logits by replacing the overall real class logit in the discriminator usually works better than simply training an auxiliary classifier. We argue that, in BID18, adversarial training is missing in the auxiliary classifier, which would make the model more likely to suffer mode collapse and produce low quality samples.
We also experimentally find that predefined labeling tends to result in intra-class mode collapse and correspondingly propose dynamic labeling as a solution. The proposed model is named Activation Maximization Generative Adversarial Networks (AM-GAN). We empirically study the effectiveness of AM-GAN with a set of controlled experiments; the results are consistent with our analysis, and, notably, AM-GAN achieves the state-of-the-art Inception Score (8.91) on CIFAR-10.

In addition, through the experiments, we find that the commonly used metric needs further investigation. In our paper, we conduct a further study on the widely-used evaluation metric Inception Score BID19 and its extended metrics. We show that, with the Inception Model, Inception Score mainly tracks the diversity of the generator, while there is no reliable evidence that it can measure the true sample quality. We thus propose a new metric, called AM Score, to provide a more accurate estimation of the sample quality as a complement. In terms of AM Score, our proposed method also outperforms other strong baseline methods.

The rest of this paper is organized as follows. In Section 2, we introduce the notations and formulate LabelGAN BID19 and AC-GAN * BID18 as our baselines. We then derive the class-aware gradient for LabelGAN, in Section 3, to reveal how class labels help its training. In Section 4, we reveal the overlaid-gradient problem of LabelGAN and propose AM-GAN as a new solution, where we also analyze the properties of AM-GAN and build its connections to related work. In Section 5, we introduce several important extensions, including dynamic labeling as an alternative to predefined labeling (i.e., class condition), the activation maximization view, and a technique for enhancing AC-GAN *. We study Inception Score in Section 6 and accordingly propose the new metric AM Score. In Section 7, we empirically study AM-GAN and compare it to the baseline models with different metrics. Finally, we conclude the paper and discuss future work in Section 8.

In the original GAN formulation BID7, the loss functions of the generator G and the discriminator D are given as: DISPLAYFORM0 where D performs binary classification between the real and the generated samples, and D r (x) represents the probability of the sample x coming from the real data. The framework (see Eq. 1) has been generalized to the multi-class case where each sample x has its associated class label y ∈ {1, . . ., K, K+1}, and the K+1-th label corresponds to the generated samples BID19. Its loss functions are defined as: DISPLAYFORM0 DISPLAYFORM1 where D i (x) denotes the probability of the sample x being class i. The loss can be written in the form of cross-entropy, which will simplify our later analysis: DISPLAYFORM2 DISPLAYFORM3 where DISPLAYFORM4, and H is the cross-entropy, defined as H(p, q) = −Σ i p i log q i. We refer to the above model as LabelGAN (using class labels) throughout this paper.

Besides extending the original two-class discriminator as discussed in the above section, BID18 proposed an alternative approach, i.e., AC-GAN, to incorporate class label information, which introduces an auxiliary classifier C for real classes in the original GAN framework.
With the core idea unchanged, we define a variant of AC-GAN as follows, and refer to it as AC-GAN *: DISPLAYFORM0 DISPLAYFORM1 DISPLAYFORM2 DISPLAYFORM3 where D r (x) and D f (x) = 1 − D r (x) are outputs of the binary discriminator, the same as in the vanilla GAN; u(·) is the vectorizing operator, similar to v(·) but defined with K classes; and C(x) is the probability distribution over the K real classes given by the auxiliary classifier. In AC-GAN, each sample has a coupled target class y, and a loss on the auxiliary classifier w.r.t. y is added to the generator to leverage the class label information. We refer to the losses on the auxiliary classifier, i.e., Eq. FORMULA7 and FORMULA9, as the auxiliary classifier losses.

The above formulation is a modified version of the original AC-GAN. Specifically, we omit the auxiliary classifier loss E (x,y)∼G [H(u(y), C(x))], which encourages the auxiliary classifier C to classify the fake sample x to its target class y. Further discussions are provided in Section 5.3. Note that we also adopt the − log(D r (x)) loss in the generator.

In this section, we introduce the class-aware gradient, i.e., the gradient of the generator's loss w.r.t. the class logits in the discriminator. By analyzing the class-aware gradient of LabelGAN, we find that the gradient tends to refine each sample towards being one of the classes, which sheds some light on how the class label information helps the generator to improve the generation quality. Before delving into the details, we first introduce the following lemma on the gradient properties of the cross-entropy loss to make our analysis clearer.

Lemma 1. With l being the logits vector and σ being the softmax function, let σ(l) be the current softmax probability distribution and p̂ denote the target probability distribution; then ∂H(p̂, σ(l)) / ∂l = σ(l) − p̂.

For a generated sample x, the loss in LabelGAN is L lab G (x), as defined above. With Lemma 1, the gradient of L lab G (x) w.r.t. the logits vector l(x) is given as: DISPLAYFORM2. With the above equations, the gradient of L lab G (x) w.r.t. x is: DISPLAYFORM3, where DISPLAYFORM4.

Figure 1: An illustration of the overlaid-gradient problem. When two or more classes are encouraged at the same time, the combined gradient may direct towards none of these classes. It could be addressed by assigning each generated sample a specific target class instead of the overall real class.

From the formulation, we find that the overall gradient w.r.t. a generated sample x is 1 − D r (x), which is the same as that in the vanilla GAN BID7. And the gradient on the real classes is further distributed to each specific real class logit l k (x) according to its current probability ratio D k (x) / D r (x). As such, the gradient naturally takes the label information into consideration: for a generated sample, a higher probability of a certain class will lead to a larger step towards increasing the corresponding confidence for that class. Hence, individually, the gradient from the discriminator for each sample tends to refine it towards being one of the classes in a probabilistic sense. That is, each sample in LabelGAN is optimized to be one of the real classes, rather than simply to be real as in the vanilla GAN. We thus regard LabelGAN as an implicit target class model. Refining each generated sample towards one of the specific classes would help improve the sample quality. Recall that there are similar inspirations in related work.
BID4 showed that the results could be significantly better if a GAN is trained with separated classes. And AC-GAN BID18 introduces an extra loss that forces each sample to fit one class and achieves a better result.

In LabelGAN, the generator gets its gradients from the K specific real class logits in the discriminator and tends to refine each sample towards being one of the classes. However, LabelGAN actually suffers from the overlaid-gradient problem: all real class logits are encouraged at the same time. Though it tends to make each sample be one of these classes during the training, the gradient of each sample is a weighted average over multiple label predictors. As illustrated in Figure 1, the averaged gradient may point towards none of these classes. In the multi-exclusive class setting, each valid sample should be classified to only one of the classes by the discriminator with high confidence. One way to resolve the above problem is to explicitly assign each generated sample a single specific class as its target.

Assigning each sample a specific target class y, the loss functions of the revised version of LabelGAN can be formulated as: DISPLAYFORM0 DISPLAYFORM1 where v(y) has the same definition as in Section 2.1. The model with the aforementioned formulation is named Activation Maximization Generative Adversarial Networks (AM-GAN) in our paper; further interpretation of the name is given in Section 5.2. The only difference between AM-GAN and LabelGAN lies in the generator's loss function. Each sample in AM-GAN has a specific target class, which resolves the overlaid-gradient problem. AC-GAN BID18 also assigns each sample a specific target class, but we will show that AM-GAN and AC-GAN are substantially different in the following part of this section.
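To make the contrast concrete, the following sketch (ours, not the authors' code) computes the two generator losses from the K+1 discriminator logits: LabelGAN's implicit target pushes up the total real-class probability D r (x), while AM-GAN's explicit target is a cross-entropy against a single assigned class.

import torch
import torch.nn.functional as F

def generator_loss(logits, target_class=None):
    # logits: (batch, K+1), with classes 0..K-1 real and class K fake.
    K = logits.shape[1] - 1
    log_p = F.log_softmax(logits, dim=1)
    if target_class is None:
        # LabelGAN (implicit target): -log D_r(x), where D_r(x) is the
        # summed probability of the K real classes. All real logits are
        # encouraged at once, which causes the overlaid-gradient problem.
        return -torch.logsumexp(log_p[:, :K], dim=1).mean()
    # AM-GAN (explicit target): cross-entropy H(v(y), D(x)) against the
    # assigned class y, pushing each sample towards one real class.
    return F.nll_loss(log_p, target_class)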
Take the generator's loss function as an example: DISPLAYFORM0 In the K+1 classes model, the K+1 classes distribution is formulated as DISPLAYFORM1 AC-GAN introduces the auxiliary classifier in the consideration of leveraging the side information of class label, it turns out that the formulation of AC-GAN * can be viewed as a hierarchical K+1 classes model consists of a two-class discriminator and a K-class auxiliary classifier, as illustrated in FIG1. Conversely, AM-GAN is a non-hierarchical model. All K+1 classes stay in the same level of the discriminator in AM-GAN.In the hierarchical model AC-GAN *, adversarial training is only conducted at the real-fake twoclass level, while misses in the auxiliary classifier. Adversarial training is the key to the theoretical guarantee of global convergence p G = p data. Taking the original GAN formulation as an instance, if generated samples collapse to a certain point x, i.e., p G (x) > p data (x), then there must exit another point x with p G (x) < p data (x). Given the optimal D(x) = pdata (x) pG(x)+pdata(x), the collapsed point x will get a relatively lower score. And with the existence of higher score points (e.g. x), maximizing the generator's expected score, in theory, has the strength to recover from the mode-collapsed state. In practice, the p G and p data are usually disjoint, nevertheless, the general behaviors stay the same: when samples collapse to a certain point, they are more likely to get a relatively lower score from the adversarial network. Without adversarial training in the auxiliary classifier, a mode-collapsed generator would not get any penalties from the auxiliary classifier loss. In our experiments, we find AC-GAN is more likely to get mode-collapsed, and it was empirically found reducing the weight (such as 0.1 used in BID9) of the auxiliary classifier losses would help. In Section 5.3, we introduce an extra adversarial training in the auxiliary classifier with which we improve AC-GAN *'s training stability and sample-quality in experiments. On the contrary, AM-GAN, as a non-hierarchical model, can naturally conduct adversarial training among all the class logits. In the above section, we simply assume each generated sample has a target class. One possible solution is like AC-GAN BID18, predefining each sample a class label, which substantially in a conditional GAN. Actually, we could assign each sample a target class according to its current probability estimated by the discriminator. A natural choice could be the class which is of the maximal probability currently: y(x) argmax i∈{1,...,K} D i (x) for each generated sample x. We name this dynamic labeling. According to our experiments, dynamic labeling brings important improvements to AM-GAN, and is applicable to other models that require target class for each generated sample, e.g. AC-GAN, as an alternative to predefined labeling. We experimentally find GAN models with pre-assigned class label tend to encounter intra-class mode collapse. In addition, with dynamic labeling, the GAN model remains generating from pure random noises, which has potential benefits, e.g. making smooth interpolation across classes in the latent space practicable. Activation maximization is a technique which is traditionally applied to visualize the neuron(s) of pretrained neural networks BID16 b; BID5 ).The GAN training can be viewed as an Adversarial Activation Maximization Process. 
To be more specific, the generator is trained to perform activation maximization for each generated sample on the neuron that represents the log probability of its target class, while the discriminator is trained to distinguish generated samples and prevent them from getting their desired high activation. It is worth mentioning that a sample that maximizes the activation of one neuron is not necessarily of high quality. Traditionally, people introduce various priors to counter this phenomenon BID16. In GAN, the adversarial process of GAN training can detect unrealistic samples and thus ensures that high activation is achieved by high-quality samples that strongly confuse the discriminator. We thus name our model the Activation Maximization Generative Adversarial Network (AM-GAN).

Experimentally, we find AC-GAN easily gets mode-collapsed, and a relatively low weight for the auxiliary classifier term in the generator's loss function helps. In Section 4.3, we attribute mode collapse to the lack of adversarial training in the auxiliary classifier. From the adversarial activation maximization view: without adversarial training, the auxiliary classifier loss, which requires high activation on a certain class, cannot ensure the sample quality. That is, in AC-GAN, the vanilla GAN loss plays the role of ensuring sample quality and avoiding mode collapse. Here we introduce an extra loss to the auxiliary classifier in AC-GAN * to enforce adversarial training, and experimentally find it consistently improves performance: DISPLAYFORM0 where u(·) represents the uniform distribution, which in spirit is the same as CatGAN BID20.

Recall that we omit the auxiliary classifier loss E (x,y)∼G [H(u(y), C(x))] in AC-GAN *. According to our experiments, E (x,y)∼G [H(u(y), C(x))] does improve AC-GAN *'s stability and makes it less likely to get mode collapse, but it also leads to a worse Inception Score. We will report the detailed results in Section 7. Our understanding of this phenomenon is that, by encouraging the auxiliary classifier to also classify fake samples to their target classes, it actually reduces the auxiliary classifier's ability to provide gradient guidance towards the real classes, and thus also alleviates the conflict between the GAN loss and the auxiliary classifier loss.

One of the difficulties in generative models is the evaluation methodology BID22. In this section, we conduct both mathematical and empirical analysis of the widely-used evaluation metric Inception Score BID19 and other relevant metrics. We will show that Inception Score mainly works as a diversity measurement, and we propose the AM Score as a complement to Inception Score for estimating the generated sample quality.

As a recently proposed metric for evaluating the performance of generative models, Inception Score has been found to be well correlated with human evaluation BID19, where a publicly-available Inception model C pre-trained on ImageNet is introduced. By applying the Inception model to each generated sample x and getting the corresponding class probability distribution C(x), Inception Score is calculated via exp(E x [KL(C(x) ‖ C̄ G)]), where E x is short for E x∼G, C̄ G = E x [C(x)] is the overall probability distribution of the generated samples over classes, as judged by C, and KL denotes the Kullback-Leibler divergence.
As proved in Appendix D, E x [KL(C(x) ‖ C̄ G)] can be decomposed into two entropy terms: E x [KL(C(x) ‖ C̄ G)] = H(C̄ G) − E x [H(C(x))]. A common understanding of how Inception Score works lies in that a high score in the first term, H(C̄ G), indicates that the generated samples have high diversity (the overall class probability distribution is evenly distributed), and a high score in the second term, −E x [H(C(x))], indicates that each individual sample has high quality (each generated sample's class probability distribution is sharp, i.e., it can be classified into one of the real classes with high confidence) BID19. However, taking CIFAR-10 as an illustration, the data are not evenly distributed over the classes under the Inception model trained on ImageNet, as presented in FIG3. This makes Inception Score problematic in view of the decomposed scores, i.e., H(C̄ G) and −E x [H(C(x))]. For instance, one would ask whether a higher H(C̄ G) indicates better mode coverage and whether a smaller H(C(x)) indicates better sample quality.

FIG2: A common understanding of Inception Score is that the value of H(C̄ G) measures the diversity of generated samples and is expected to increase in the training process. However, it usually tends to decrease in practice, as illustrated in (c).

FIG3: The H(C(x)) score of CIFAR-10 training data is variant, which means that, even on real data, it would still strongly prefer some samples over others. H(C(x)) under a classifier pre-trained on CIFAR-10 has low values for all CIFAR-10 training data and thus can be used as an indicator of sample quality.

We experimentally find that, as shown in FIG2, the value of H(C̄ G) usually goes down during the training process, although it is expected to increase. And when we delve into the detail of H(C(x)) for each specific sample in the training data, we find that the value of H(C(x)) also varies, as illustrated in FIG3, which means that, even on real data, the score would still strongly prefer some samples over others. The exp operator in Inception Score and the large variance of the value of H(C(x)) aggravate the phenomenon. We also observe the preference at the class level in FIG3, e.g., E x [H(C(x))] = 2.14 for trucks, while E x [H(C(x))] = 3.80 for birds. It seems that, for an ImageNet classifier, neither of the two indicators of Inception Score works correctly. Next we will show that Inception Score actually works as a diversity measurement.

Since the two individual indicators are strongly correlated, here we go back to Inception Score's original formulation E x [KL(C(x) ‖ C̄ G)]. In this form, we could interpret Inception Score as requiring each sample's distribution C(x) to be highly different from the overall distribution of the generator C̄ G, which indicates good diversity over the generated samples. As is empirically observed, a mode-collapsed generator usually gets a low Inception Score. In an extreme case, assuming all the generated samples collapse to a single point, then C(x) = C̄ G and we would get the minimal Inception Score 1.0, which is the exp of zero. To simulate mode collapse in a more complicated case, we design synthetic experiments as follows: given a set of N points {x 0, x 1, x 2, ..., x N−1}, with each point x i adopting the distribution C(x i) = v(i) and representing class i, where v(i) is the vectorization operator of length N, as defined in Section 2.1, we randomly drop m points, evaluate E x [KL(C(x) ‖ C̄ G)], and draw the curve.
As shown in Figure 5, when N − m increases, the value of E x [KL(C(x) ‖ C̄ G)] monotonically increases in general, which means that it can well capture mode dropping and the diversity of the generated distributions.

One remaining question is whether good mode coverage and sample diversity imply high quality of the generated samples. From the above analysis, we do not find any evidence that they do. A possible explanation is that, in practice, sample diversity is usually well correlated with sample quality. However, an intra-class level mode collapse, where for each class only one sample is repeatedly generated, cannot be detected by the E x [KL(C(x) ‖ C̄ G)] score. This means that, even with an accordingly pretrained classifier, the E x [KL(C(x) ‖ C̄ G)] score cannot detect intra-class level mode collapse. This also explains why the Inception Network on ImageNet could be a good candidate C for CIFAR-10. Exploring the optimal C is a challenging problem and we leave it as future work. However, there is no evidence that using an Inception Network trained on ImageNet can accurately measure the sample quality, as shown in Section 6.2. To complement Inception Score, we propose to introduce an extra assessment using an accordingly pretrained classifier. Under the accordingly pretrained classifier, most real samples share similar H(C(x)) values, and 99.6% of samples hold scores less than 0.05, as shown in FIG3, which demonstrates that the classifier's H(C(x)) can be used as an indicator of sample quality.

The H(C̄ G) term, however, is actually problematic when the training data is not evenly distributed over classes, since it is optimized by a uniform distribution. To take C̄ train into account, we replace H(C̄ G) with the KL divergence between C̄ train and C̄ G, so that AM Score = KL(C̄ train ‖ C̄ G) + E x [H(C(x))], which requires C̄ G to be close to C̄ train and each sample x to have a low-entropy C(x). The minimal value of AM Score is zero, and the smaller the value, the better. A sample training curve of AM Score is shown in Figure 6, where all indicators in AM Score work as expected (going down in the training process).

Footnote 1: Inception Score and AM Score measure the diversity and quality of generated samples, while FID BID10 measures the distance between the generated distribution and the real distribution.

TAB0: Inception Score (left four columns, higher is better) and AM Score (right four columns, lower is better), on CIFAR-10 with dynamic / predefined labeling and Tiny-ImageNet with dynamic / predefined labeling.
GAN: 7.04 ± 0.06, 7.27 ± 0.07, --, -- | 0.45 ± 0.00, 0.43 ± 0.00, --, --
GAN *: 7.25 ± 0.07, 7.31 ± 0.10, --, -- | 0.40 ± 0.00, 0.41 ± 0.00, --, --
AC-GAN *: 7.41 ± 0.09, 7.79 ± 0.08, 7.28 ± 0.07, 7.89 ± 0.11 | 0.17 ± 0.00, 0.16 ± 0.00, 1.64 ± 0.02, 1.01 ± 0.01
AC-GAN * +: 8.56 ± 0.11, 8.01 ± 0.09, 10.25 ± 0.14, 8.23 ± 0.10 | 0.10 ± 0.00, 0.14 ± 0.00, 1.04 ± 0.01, 1.20 ± 0.01
LabelGAN: 8.63 ± 0.08, 7.88 ± 0.07, 10.82 ± 0.16, 8.62 ± 0.11 | 0.13 ± 0.00, 0.25 ± 0.00, 1.11 ± 0.01, 1.37 ± 0.01
AM-GAN: 8.83 ± 0.09, 8.35 ± 0.12, 11.45 ± 0.15, 9.55 ± 0.11 | 0.08 ± 0.00, 0.05 ± 0.00, 0.88 ± 0.01, 0.61 ± 0.01
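A sketch of both metrics, computed from a classifier's class-probability outputs; this is our own minimal implementation of the formulas above, not the evaluation code used in the experiments.

import numpy as np

def kl(p, q, eps=1e-12):
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def inception_score(P):
    # P: (n, K) matrix whose rows are C(x) for generated samples.
    # Inception Score = exp(E_x[KL(C(x) || C_bar_G)]).
    p_bar = P.mean(axis=0)
    return float(np.exp(np.mean([kl(p, p_bar) for p in P])))

def am_score(P, p_train):
    # AM Score = KL(C_bar_train || C_bar_G) + E_x[H(C(x))]; lower is
    # better. p_train is the class distribution of the real training data,
    # so uneven class distributions are handled correctly.
    p_bar = P.mean(axis=0)
    mean_entropy = float(np.mean(-np.sum(P * np.log(P + 1e-12), axis=1)))
    return kl(p_train, p_bar) + mean_entropy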
Visual results of various models are provided in the Appendix, considering the page limit (see, e.g., Figure 9). The repeatable experiment code is published for further research.

The first question is whether training an auxiliary classifier without introducing correlated losses to the generator would help improve the sample quality. In other words, we train the generator with only the GAN loss in the AC-GAN * setting (referred to as GAN *). As shown in TAB0, it improves GAN's sample quality, but the improvement is limited compared to the other methods. This indicates that the introduction of correlated losses plays an essential role in the remarkable improvement of GAN training.

The usage of predefined labels turns the GAN model into its conditional version, which is substantially different from generating samples from pure random noise. In this experiment, we use dynamic labeling for AC-GAN *, AC-GAN * + and AM-GAN to seek a fair comparison among different discriminator models, including LabelGAN and GAN. We keep the network structure and hyper-parameters the same for different models; the only difference lies in the output layer of the discriminator, i.e., the number of class logits, which is necessarily different across models.

As shown in TAB0, AC-GAN * achieves improved sample quality over the vanilla GAN, but suffers from mode collapse, indicated by the value 0.61 in MS-SSIM, as in TAB1. By introducing adversarial training in the auxiliary classifier, AC-GAN * + outperforms AC-GAN *. As an implicit target class model, LabelGAN suffers from the overlaid-gradient problem and achieves a relatively higher per-sample entropy (0.124) in the AM Score, compared to the explicit target class models AM-GAN (0.079) and AC-GAN * + (0.102). In the table, our proposed AM-GAN model reaches the best scores against these baselines.

We also test AC-GAN * with a decreased weight on the auxiliary classifier losses in the generator (relative to the GAN loss). It achieves 7.19 in Inception Score, 0.23 in AM Score and 0.35 in MS-SSIM. The 0.35 in MS-SSIM indicates there is no obvious mode collapse, which also conforms with our above analysis.

AM-GAN achieves Inception Score 8.83 in the previous experiments, which significantly outperforms the baseline models in both our implementation and their reported scores, as in Table 3. By further enhancing the discriminator with more filters in each layer, AM-GAN also outperforms the orthogonal work BID8 that enhances the class label information via class splitting. As a result, AM-GAN achieves the state-of-the-art Inception Score 8.91 on CIFAR-10.

Table 3: Inception Score comparison on CIFAR-10. Splitting GAN uses the class splitting technique to enhance the class label information, which is orthogonal to AM-GAN.
DFM BID24: 7.72 ± 0.13
Improved GAN BID19: 8.09 ± 0.07
AC-GAN BID18: 8.25 ± 0.07
WGAN-GP + AC BID9: 8.42 ± 0.10
SGAN BID12: 8.59 ± 0.12
AM-GAN (our work): 8.91 ± 0.11
Splitting GAN BID8: 8.87 ± 0.09
Real data: 11.24 ± 0.12

It is found in our experiments that GAN models with class condition (predefined labeling) tend to encounter intra-class mode collapse (ignoring the noise), which is obvious at the very beginning of GAN training and gets exacerbated during the process. In the training process of GAN, it is important to ensure a balance between the generator and the discriminator.

To empirically validate our analysis and the effectiveness of the proposed method, we conduct experiments on image benchmark datasets including CIFAR-10 and Tiny-ImageNet, which comprises 200 classes with 500 training images per class. For evaluation, several metrics are used throughout our experiments, including Inception Score with the ImageNet classifier and AM Score with a correspondingly pretrained classifier for each dataset (a DenseNet BID11 model). We also follow BID18 and use the mean MS-SSIM BID23 of randomly chosen pairs of images within a given class as a coarse detector of intra-class mode collapse. A modified DCGAN structure, as listed in Appendix F, is used in the experiments.
With the same generator network structure, switching from dynamic labeling to class conditioning, we find it hard to hold a good balance between the generator and the discriminator: to avoid the initial intra-class mode collapse, the discriminator needs to be very powerful; however, it then usually turns out that the discriminator is too powerful to provide suitable gradients for the generator, which results in poor sample quality. Nevertheless, we found a suitable discriminator and conducted a set of comparisons with it. The results can be found in TAB0. The general result is similar to the above: AC-GAN*+ still outperforms AC-GAN*, and our AM-GAN reaches the best performance. It is worth noticing that AC-GAN* does not suffer from mode collapse in this setting. In the class conditional version, although with fine-tuned parameters, the Inception Score is still relatively low. The explanation could be that, in the class conditional version, the sample diversity still tends to decrease, even with a relatively powerful discriminator. With slight intra-class mode collapse, the per-sample quality tends to improve, which results in a lower AM Score. Supplementary, though not very strict, evidence of partial mode collapse in these experiments is that the gradient magnitude with respect to the noise z is around 45.0 in the dynamic labeling setting, while it is 25.0 in the conditional version. LabelGAN does not need explicit labels, and the model is the same in the two experiment settings. But note that both the Inception Score and the AM Score get worse in the conditional version. The only difference is that the discriminator becomes more powerful with an extended layer, which attests that the balance between the generator and discriminator is crucial. We find that, without the concern of intra-class mode collapse, using dynamic labeling makes the balance between generator and discriminator much easier to maintain. Note that we report results of the modified version of AC-GAN, i.e., AC-GAN*, in TAB0. If we take the omitted loss E_{(x,y)∼G}[H(u(y), C(x))] back into AC-GAN*, which leads to the original AC-GAN (see Section 2.2), it turns out to achieve worse results on both Inception Score and AM Score on CIFAR-10, though it dismisses mode collapse. Specifically, in the dynamic labeling setting, the Inception Score decreases from 7.41 to 6.48 and the AM Score increases from 0.17 to 0.43, while in the predefined class setting, the Inception Score decreases from 7.79 to 7.66 and the AM Score increases from 0.16 to 0.20. This performance drop might be because we use different network architectures and hyper-parameters from AC-GAN BID18. But we still fail to achieve its reported Inception Score, i.e., 8.25, on CIFAR-10 even when using the hyper-parameters reported in the original paper. Since they do not publicize their code, we suppose there might be some unreported details that result in the performance gap. We leave further study to future work. We plot the training curves in terms of Inception Score and AM Score in FIG6. Inception Score and AM Score are evaluated with the same number of samples, 50k, following BID19. Compared with Inception Score, AM Score is more stable in general. With more samples, Inception Score would be more stable; however, the evaluation of Inception Score is relatively costly. A better alternative to the Inception Model could help solve this problem. The AC-GAN* curves show stronger jitter relative to the others, which might relate to the conflict between the auxiliary classifier loss and the GAN loss in the generator.
Another observation is that AM-GAN is comparable with LabelGAN and AC-GAN*+ in terms of Inception Score at the beginning of training, while in terms of AM Score, they are quite distinguishable from each other. In the CIFAR-10 experiments, the results are consistent with our analysis, and the proposed method outperforms these strong baselines. We demonstrate that the results generalize with experiments on another dataset, Tiny-ImageNet. Tiny-ImageNet consists of more classes and fewer samples per class than CIFAR-10, which makes it more challenging. We downsize Tiny-ImageNet samples from 64×64 to 32×32 and simply leverage the same network structure used for CIFAR-10; the results are also shown in TAB0. From the comparison, AM-GAN still outperforms the other methods remarkably, and AC-GAN*+ again gains better performance than AC-GAN*. In this paper, we analyze current GAN models that incorporate class label information. Our analysis shows that LabelGAN works as an implicit target class model; however, it suffers from the overlaid-gradient problem, and an explicit target class solves this problem. We demonstrate that introducing the class logits in a non-hierarchical way, i.e., replacing the overall real class logit in the discriminator with the specific real class logits, usually works better than simply supplementing an auxiliary classifier, where we provide an activation maximization view of GAN training and highlight the importance of adversarial training. In addition, according to our experiments, predefined labeling tends to lead to intra-class mode collapse, and we propose dynamic labeling as an alternative. Our extensive experiments on benchmark datasets validate our analysis and demonstrate the superior performance of our proposed AM-GAN against strong baselines. Moreover, we delve deep into the widely-used evaluation metric Inception Score and reveal that it mainly works as a diversity measurement. We also propose the AM Score as a complement to more accurately estimate sample quality. In this paper, we focus on the generator and its sample quality, while some related work focuses on the discriminator and semi-supervised learning. For future work, we would like to conduct empirical studies on discriminator learning and semi-supervised learning. We extend AM-GAN to unlabeled data in Appendix C, where unsupervised and semi-supervised learning are accessible in the framework of AM-GAN. Classifier-based evaluation metrics might encounter problems related to adversarial samples, which requires further study. Combining AM-GAN with Integral Probability Metric based GAN models such as Wasserstein GAN could also be a promising direction, since it is orthogonal to our work. Label smoothing, which avoids extreme logit values, was shown to be a good regularization BID21. A general version of label smoothing modifies the target probabilities of the discriminator, smoothing the fake-sample target by λ1 and the real-sample target by λ2. BID19 proposed to use only one-sided label smoothing, that is, to only apply label smoothing to real samples: λ1 = 0 and λ2 > 0. The reasoning behind one-sided label smoothing is that applying label smoothing to fake samples will lead to fake modes in the data distribution, which is somewhat obscure. We will next show the exact problems that arise when applying label smoothing to fake samples along with the log(1−D_r(x)) generator loss, in the view of the gradient w.r.t. the class logit, i.e., the class-aware gradient, and we will also show that the problem does not exist when using the −log(D_r(x)) generator loss.
The log(1−D_r(x)) generator loss with label smoothing, in terms of cross-entropy, is DISPLAYFORM3; with Lemma 1, its negative gradient is DISPLAYFORM4. Gradient vanishing is a well-known training problem of GANs. Optimizing D_r(x) towards 0 or 1 is also not what is desired, because the discriminator should map real samples to the smoothed target distribution. The −log(D_r(x)) generator loss with target [1−λ, λ], in terms of cross-entropy, is DISPLAYFORM7, the negative gradient of which is DISPLAYFORM8. Without label smoothing (λ = 0), −log(D_r(x)) always preserves the same gradient direction as log(1−D_r(x)), though giving a different gradient scale. We must note that a non-zero gradient does not mean that the gradient is efficient or valid. The both-side label smoothed version has a strong connection to Least-Squares GAN BID15: with the fake logit fixed to zero, the discriminator maps real samples to α on the real logit and maps fake samples to β on the real logit, while the generator in contrast tries to map fake samples to α. Their gradients on the logit are also similar. The auxiliary classifier loss of AM-GAN can also be viewed as the cross-entropy version of CatGAN: the generator of CatGAN directly minimizes the entropy H(R(D(x))) to make each sample concentrate on one class, while AM-GAN achieves this via the first term of its decomposed loss, H(R(v(x)), R(D(x))), in terms of cross-entropy with a given target distribution. That is, AM-GAN is the cross-entropy version of CatGAN combined with LabelGAN by introducing an additional fake class. The discriminator of CatGAN maximizes the prediction entropy of each fake sample: DISPLAYFORM0. In AM-GAN, as we have an extra class for fake samples, we can achieve this in a simpler manner by minimizing the probability on the real logits. If v_{r(K+1)} is not zero, that is, when we apply negative label smoothing BID19, we can define R(v)_{K+1} to be a uniform distribution. As a result, the label-smoothed portion of the probability mass will be required to be uniformly distributed, similar to CatGAN. In this section, we extend AM-GAN to unlabeled data. Our solution is analogous to prior semi-supervised GAN approaches. Under the semi-supervised setting, we can add the following loss to the original solution to integrate the unlabeled data (with distribution denoted as p_unl(x)): DISPLAYFORM0. C.2 UNSUPERVISED SETTING Under the unsupervised setting, we need to introduce one extra loss, analogous to CatGAN: DISPLAYFORM1, where p_ref is a reference label distribution for the prediction on unsupervised data. For example, p_ref could be set to a uniform distribution, which requires the unlabeled data to make use of all the candidate class logits. This loss can optionally be added in the semi-supervised setting, where p_ref could be defined as the predicted label distribution on the labeled training data, E_{x∼p_data}[D(x)]. As a recently proposed metric for evaluating the performance of generative models, the Inception Score has been found to correlate well with human evaluation BID19; it relies on a pre-trained, publicly available Inception model C.
By applying the Inception model to each generated sample x and getting the corresponding class probability distribution C(x), the Inception Score is calculated via

    Inception Score = exp( E_x[ KL(C(x) ‖ C̄_G) ] ),

where E_x is short for E_{x∼G}, C̄_G = E_x[C(x)] is the overall probability distribution over classes of the generated samples, as judged by C, and KL denotes the Kullback-Leibler divergence, defined as

    KL(p ‖ q) = Σ_i p_i log(p_i / q_i) = Σ_i p_i log p_i − Σ_i p_i log q_i = −H(p) + H(p, q).

An extended metric, the Mode Score, was proposed in BID3 to take the prior distribution of the labels into account; it is calculated via

    Mode Score = exp( E_x[ KL(C(x) ‖ C̄_train) ] − KL(C̄_G ‖ C̄_train) ),

where the overall class distribution of the training data, C̄_train, has been added as a reference. We show in the following that, in fact, Mode Score and Inception Score are equivalent.

Lemma 3. Let p(x) be the class probability distribution of the sample x, and let p̄ denote another probability distribution. Then E_x[H(p(x), p̄)] = H(E_x[p(x)], p̄).

With Lemma 3, we have

    log(Inception Score) = E_x[KL(C(x) ‖ C̄_G)]
                         = −E_x[H(C(x))] + E_x[H(C(x), C̄_G)]
                         = −E_x[H(C(x))] + H(C̄_G),

    log(Mode Score) = E_x[KL(C(x) ‖ C̄_train)] − KL(C̄_G ‖ C̄_train)
                    = −E_x[H(C(x))] + H(C̄_G, C̄_train) − H(C̄_G, C̄_train) + H(C̄_G)
                    = −E_x[H(C(x))] + H(C̄_G),

so the two scores coincide.
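This equivalence is easy to check numerically; below is a small sketch with synthetic classifier outputs (all names are our own, chosen for illustration):

```python
import numpy as np

def kl(p, q, eps=1e-12):
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

rng = np.random.default_rng(0)
probs_gen = rng.dirichlet(np.ones(10), size=5000)  # per-sample C(x), shape (N, K)
c_bar_train = rng.dirichlet(np.ones(10))           # training class marginal
c_bar_g = probs_gen.mean(axis=0)                   # generated class marginal

log_inception = np.mean(kl(probs_gen, c_bar_g))
log_mode = np.mean(kl(probs_gen, c_bar_train)) - kl(c_bar_g, c_bar_train)

print(np.isclose(log_inception, log_mode))  # True: the two scores coincide
```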
Understand how class labels help GAN training. Propose a new evaluation metric for generative models.
936
scitldr
Modern neural networks are over-parametrized. In particular, each rectified linear hidden unit can be modified by a multiplicative factor by adjusting input and out- put weights, without changing the rest of the network. Inspired by the Sinkhorn-Knopp algorithm, we introduce a fast iterative method for minimizing the l2 norm of the weights, equivalently the weight decay regularizer. It provably converges to a unique solution. Interleaving our algorithm with SGD during training improves the test accuracy. For small batches, our approach offers an alternative to batch- and group- normalization on CIFAR-10 and ImageNet with a ResNet-18. Deep Neural Networks (DNNs) have achieved outstanding performance across a wide range of empirical tasks such as image classification BID1, image segmentation , speech recognition (a), natural language processing or playing the game of Go BID16. These successes have been driven by the availability of large labeled datasets such as ImageNet BID13, increasing computational power and the use of deeper models (b).Although the expressivity of the function computed by a neural network grows exponentially with depth BID12 ), in practice deep networks are vulnerable to both over-and underfitting (; BID1 b). Widely used techniques to prevent DNNs from overfitting include regularization methods such as weight decay BID2, Dropout (b) and various data augmentation schemes BID1 BID17 BID19 b). Underfitting can occur if the network gets stuck in a local minima, which can be avoided by using stochastic gradient descent algorithms (; ; BID18 BID0, sometimes along with carefully tuned learning rate schedules (b;).Training deep networks is particularly challenging due to the vanishing/exploding gradient problem. It has been studied for Recurrent Neural networks (RNNs) as well as standard feedforward networks (a; BID7 . After a few iterations, the gradients computed during backpropagation become either too small or too large, preventing the optimization scheme from converging. This is alleviated by using non-saturating activation functions such as rectified linear units (ReLUs) BID1 or better initialization schemes preserving the variance of the input across layers (; BID7 a). Failure modes that prevent the training from starting have been theoretically studied by.Two techniques in particular have allowed vision models to achieve "super-human" accuracy. Batch Normalization (BN) was developed to train Inception networks . It introduces intermediate layers that normalize the features by the mean and variance computed within the current batch. BN is effective in reducing training time, provides better generalization capabilities after training and diminishes the need for a careful initialization. Network architectures such as ResNet (b) and DenseNet use skip connections along with BN to improve the information flow during both the forward and backward passes. DISPLAYFORM0 Figure 1: Matrices W k and W k+1 are updated by multiplying the columns of the first matrix with rescaling coefficients. The rows of the second matrix are inversely rescaled to ensure that the product of the two matrices is unchanged. The rescaling coefficients are strictly positive to ensure functional equivalence when the matrices are interleaved with ReLUs. This rescaling is applied iteratively to each pair of adjacent matrices. In this paper, we address the more complex cases of biases, convolutions, max-pooling or skip-connections to be able to balance modern CNN architectures. 
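As a concrete sketch of the rescaling in Figure 1, the snippet below balances one pair of fully-connected layers. The per-neuron coefficient is obtained by minimizing the one-dimensional problem d ↦ d^p · (incoming norm)^p + d^(−p) · (outgoing norm)^p, which works out to the square root of the norm ratio; this is our own minimal illustration (variable names are ours), not the released implementation.

```python
import numpy as np

def balance_pair(w1: np.ndarray, w2: np.ndarray, p: int = 2):
    """Jointly rescale columns of w1 and rows of w2, minimizing ||w1 D||_p^p + ||D^-1 w2||_p^p.

    Assumes every hidden unit has at least one non-zero incoming weight.
    """
    col = np.linalg.norm(w1, ord=p, axis=0)   # incoming p-norm per hidden unit
    row = np.linalg.norm(w2, ord=p, axis=1)   # outgoing p-norm per hidden unit
    d = np.sqrt(row / col)                    # argmin of a*d^p + b/d^p is (b/a)^(1/(2p))
    return w1 * d, w2 / d[:, None]

rng = np.random.default_rng(0)
w1, w2 = rng.normal(size=(8, 16)), rng.normal(size=(16, 4))
x = rng.normal(size=(1, 8))
relu = lambda t: np.maximum(t, 0.0)

w1b, w2b = balance_pair(w1, w2)
# Functional equivalence: positive per-channel scalings commute with the ReLU.
assert np.allclose(relu(x @ w1) @ w2, relu(x @ w1b) @ w2b)
# The l2 energy of the pair does not increase.
energy = lambda a, b: (a ** 2).sum() + (b ** 2).sum()
print(energy(w1, w2), ">=", energy(w1b, w2b))
```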
However, BN has some limitations. In particular, BN only works well with sufficiently large batch sizes . For sizes below 16 or 32, the batch statistics have a high variance and the test error increases significantly. This prevents the investigation of highercapacity models because large, memory-consuming batches are needed in order for BN to work in its optimal range. In many use cases, including video recognition and image segmentation , the batch size restriction is even more challenging because the size of the models allows for only a few samples per batch. Another restriction of BN is that it is computationally intensive, typically consuming 20% to 30% of the training time. Variants such as Group Normalization (GN) cover some of the failure modes of BN.In this paper, we introduce a novel algorithm to improve both the training speed and generalization accuracy of networks by using their over-parameterization to regularize them. In particular, we focus on neural networks that are positive-rescaling equivalent BID8, i.e. whose weights are identical up to positive scalings and matching inverse scalings. The main principle of our method, referred to as Equi-normalization (ENorm), is illustrated in Figure 1 for the fullyconnected case. We scale two consecutive matrices with rescaling coefficients that minimize the joint p norm of those two matrices. This amounts to re-parameterizing the network under the constraint of implementing the same function. We conjecture that this particular choice of rescaling coefficients ensures a smooth propagation of the gradients during training. A limitation is that our current proposal, in its current form, can only handle learned skipconnections like those proposed in type-C ResNet. For this reason, we focus on architectures, in particular ResNet18, for which the learning converges with learned skip-connection, as opposed to architectures like ResNet-50 for which identity skip-connections are required for convergence. In summary,• We introduce an iterative, batch-independent algorithm that re-parametrizes the network within the space of rescaling equivalent networks, thus preserving the function implemented by the network; • We prove that the proposed Equi-normalization algorithm converges to a unique canonical parameterization of the network that minimizes the global p norm of the weights, or equivalently, when p = 2, the weight decay regularizer; • We extend ENorm to modern convolutional architectures, including the widely used ResNets, and show that the theoretical computational overhead is lower compared to BN (×50) and even compared to GN (×3); • We show that applying one ENorm step after each SGD step outperforms both BN and GN on the CIFAR-10 (fully connected) and ImageNet (ResNet-18) datasets.• Our code is available at https://github.com/facebookresearch/enorm. The paper is organized as follows. Section 2 reviews related work. Section 3 defines our Equinormalization algorithm for fully-connected networks and proves the convergence. Section 4 shows how to adapt ENorm to convolutional neural networks (CNNs). Section 5 details how to employ ENorm for training neural networks and Section 6 presents our experimental . This section reviews methods improving neural network training and compares them with ENorm. Since there is a large body of literature in this research area, we focus on the works closest to the proposed approach. 
From early works, researchers have noticed the importance of normalizing the input of a learning system, and by extension the input of any layer in a DNN BID4. Such normalization is applied either to the weights or to the activations. On the other hand, several strategies aim at better controlling the geometry of the weight space with respect to the loss function. Note that these research directions are not orthogonal. For example, explicitly normalizing the activations using BN has smoothing effects on the optimization landscape BID15.Normalizing activations. Batch Normalization normalizes the activations by using statistics computed along the batch dimension. As stated in the introduction, the dependency on the batch size leads BN to underperform when small batches are used. Batch Renormalization (BR) is a follow-up that reduces the sensitivity to the batch size, yet does not completely alleviate the negative effect of small batches. Several batch-independent methods operate on other dimensions, such as Layer Normalization (channel dimension) and Instance-Normalization (sample dimension) . Parametric data-independent estimation of the mean and variance in every layer is investigated by. However, these methods are inferior to BN in standard classification tasks. More recently, Group Normalization (GN) , which divides the channels into groups and normalizes independently each group, was shown to effectively replace BN for small batch sizes on computer vision tasks. Normalizing weights. Early weight normalization techniques only served to initialize the weights before training (; a). These methods aim at keeping the variance of the output activations close to one along the whole network, but the assumptions made to derive these initialization schemes may not hold as training evolves. More recently, BID14 propose a polar-like re-parametrization of the weights to disentangle the direction from the norm of the weight vectors. Note that Weight Norm (WN) does require mean-only BN to get competitive , as well as a greedy layer-wise initialization as mentioned in the paper. Optimization landscape. Generally, in the parameter space, the loss function moves quickly along some directions and slowly along others. To account for this anisotropic relation between the parameters of the model and the loss function, natural gradient methods have been introduced . They require storing and inverting the N × N curvature matrix, where N is the number of network parameters. Several works approximate the inverse of the curvature matrix to circumvent this problem BID5 BID6. Another method called Diagonal Rescaling BID3 proposes to tune a particular reparametrization of the weights with a block-diagonal approximation of the inverse curvature matrix. Finally, BID8 propose a rescaling invariant path-wise regularizer and use it to derive Path-SGD, an approximate steepest descent with respect to the path-wise regularizer. Positioning. Unlike BN, Equi-normalization focuses on the weights and is independent of the concept of batch. Like Path-SGD, our goal is to obtain a balanced network ensuring a good backpropagation of the gradients, but our method explicitly re-balances the network using an iterative algorithm instead of using an implicit regularizer. Moreover, ENorm can be readily adapted to the convolutional case whereas BID8 restrict themselves to the fully-connected case. In addition, the theoretical computational complexity of our method is much lower than the overhead introduced by BN or GN (see Section 5). 
Besides, WN parametrizes the weights in a polar-like manner, w = g × v/|v|, where g is a scalar and v are the weights, thus it does not balance the network but individually scales each layer. Finally, Sinkhorn's algorithm aims at making a single matrix doubly stochastic, while we balance a product of matrices to minimize their global norm. We first define Equi-normalization in the context of simple feed forward networks that consist of two operators: linear layers and ReLUs. The algorithm is inspired by Sinkhorn-Knopp and is designed to balance the energy of a network, i.e., the p -norm of its weights, while preserving its function. When not ambiguous, we may denote by network a weight parametrization of a given network architecture. We consider a network with q linear layers, whose input is a row vector x ∈ R n0. We denote by σ the point-wise ReLU activation. For the sake of exposition, we omit a bias term at this stage. We recursively define a simple fully connected feedforward neural network with L layers by y 0 = x, DISPLAYFORM0 and y q = y q−1 W q. Each linear layer k is parametrized by a matrix W k ∈ R n k−1 ×n k. We denote by f θ (x) = y q the function implemented by the network, where θ is the concatenation of all the network parameters. We denote by D(n) the set of diagonal matrices of size n × n for which all diagonal elements are strictly positive and by I n the identity matrix of size n × n. DISPLAYFORM1 Definition 2. θ andθ are rescaling equivalent if, for all k ∈ 1, q − 1, there exists a rescaling matrix DISPLAYFORM2 with the conventions that D 0 = I n0 and D q = I nq. This amounts to positively scaling all the incoming weights and inversely scaling all the outgoing weights for every hidden neuron. Two weights vectors θ andθ that are rescaling equivalent are also functionally equivalent (see Section 3.5 for a detailed derivation). Note that a functional equivalence class is not entirely described by rescaling operations. For example, permutations of neurons inside a layer also preserve functional equivalence, but do not change the gradient. In what follows our objective is to seek a canonical parameter vector that is rescaling equivalent to a given parameter vector. The same objective under a functional equivalence constraint is beyond the scope of this paper, as there exist degenerate cases where functional equivalence does not imply rescaling equivalence, even up to permutations. Given a network f θ and p > 0, we define the p norm of its weights as p (θ) = q k=1 W k p p. We are interested in minimizing p inside an equivalence class of neural networks in order to exhibit a unique canonical element per equivalence class. We denote the rescaling coefficients within the network as DISPLAYFORM0 n, where n is the number of hidden neurons. Fixing the weights {W k}, we refer to {D −1 k−1 W k D k} as the rescaled weights, and seek to minimize their p norm as a function of the rescaling coefficients: DISPLAYFORM1 We formalize the ENorm algorithm using the framework of block coordinate descent. We denote by DISPLAYFORM0 In what follows we assume that each hidden neuron is connected to at least one input and one output neuron. ENorm generates a sequence of rescaling coefficients δ (r) obtained by the following steps. Initialization. Define δ = (1, . . ., 1). Iteration. 
At iteration r, consider layer ∈ 1, q − 1 such that − 1 ≡ r mod q − 1 and define DISPLAYFORM1 Denoting uv the coordinate-wise product of two vectors and u/v for the division, we have DISPLAYFORM2 DISPLAYFORM3 Algorithm and pseudo-code. Algorithm 1 gives the pseudo-code of ENorm. By convention, one ENorm cycle balances the entire network once from = 1 to = q − 1. See Appendix A for illustrations showing the effect of ENorm on network weights. We now state our main convergence for Equi-normalization. The proof relies on a coordinate descent Theorem by Tseng FORMULA1 and can be found in Appendix B.1. The main difficulty is to prove the uniqueness of the minimum of ϕ. Theorem 1. Let p > 0 and (δ (r) ) r∈N be the sequence of rescaling coefficients generated by ENorm from the starting point δ as described in Section 3.3. We assume that each hidden neuron is connected to at least one input and one output neuron. Then, Convergence. The sequence of rescaling coefficients δ (r) converges to δ * as r → +∞. As a consequence, the sequence of rescaled weights also converges; Minimum global p norm. The rescaled weights after convergence minimize the global p norm among all rescaling equivalent weights; Uniqueness. The minimum is unique, i.e. δ * does not depend on the starting point δ. In the presence of biases, the network is defined as DISPLAYFORM0 For rescaling-equivalent weights satisfying, in order to preserve the inputoutput function, we define matched rescaling equivalent biases DISPLAYFORM1 show by recurrence that for every layer k, DISPLAYFORM2 where y k (resp. y k) is the intermediary network function associated with the weights W (resp. W). In particular, y q = y q, i.e. rescaling equivalent networks are functionally equivalent. We also compute the effect of applying ENorm on the gradients in the same Appendix. Equi-normalization is easily adapted to introduce a depth-wise penalty on each layer. We consider the weighted loss p,(c1,...,cq) (θ) = q k=1 c k W k p. This amounts to modifiying the rescaling coefficients asd DISPLAYFORM0 In Section 6, we explore two natural ways of defining c k: c k = c p(q−k) (uniform) and c k = 1/(n k−1 n k) (adaptive). In the uniform setup, we penalize layers exponentially according to their depth: for instance, values of c larger than 1 increase the magnitude of the weights at the end of the network. In the adaptive setup, the loss is weighted by the size of the matrices. We now extend ENorm to CNNs, by focusing on the typical ResNet architecture. We briefly detail how we adapt ENorm to convolutional or max-pooling layers, and then how to update an elementary block with a skip-connection. We refer the reader to Appendix C for a more extensive discussion. Sanity checks of our implementation are provided in Appendix E.1. Figure 2 explains how to rescale two consecutive convolutional layers. As detailed in Appendix C, this is done by first properly reshaping the filters to 2D matrices, then performing the previously described rescaling on those matrices, and then reshaping the matrices back to convolutional filters. This matched rescaling does preserve the function implemented by the composition of the two layers, whether they are interleaved with a ReLU or not. It can be applied to any two consecutive convolutional layers with various stride and padding parameters. Note that when the kernel size is 1 in both layers, we recover the fully-connected case of Figure 1. The MaxPool layer operates per channel by computing the maximum within a fixed-size kernel. 
We adapt Equation FORMULA12 to the convolutional case where the rescaling matrix D k is applied to the channel dimension of the activations y k. Then, DISPLAYFORM0 Thus, the activations before and after the MaxPool layer have the same scaling and the functional equivalence is preserved when interleaving convolutional layers with MaxPool layers. We now consider an elementary block of a ResNet-18 architecture as depicted in FIG0. In order to maintain functional equivalence, we only consider ResNet architectures of type C as defined in (b), where all shortcuts are learned 1 × 1 convolutions. As detailed in Appendix C, rescaling two consecutive blocks requires (a) to define the structure of the rescaling process, i.e. where to insert the rescaling coefficients and (b) a formula for computing those rescaling coefficients. ENorm & SGD. As detailed in Algorithm 2, we balance the network periodically after updating the gradients. By design, this does not change the function implemented by the network but will yield different gradients in the next SGD iteration. Because this re-parameterization performs a jump in the parameter space, we update the momentum using Equation FORMULA1 and the same matrices D k as those used for the weights. The number of ENorm cycles after each SGD step is an hyperparameter and by default we perform one ENorm cycle after each SGD step. In Appendix D, we also explore a method to jointly learn the rescaling coefficients and the weights with SGD, and report corresponding . Computational advantage over BN and GN. TAB2 provides the number of elements (weights or activations) accessed when normalizing using various techniques. Assuming that the complexity (number of operations) of normalizing is proportional to the number of elements and assuming all techniques are equally parallelizable, we deduce that our method (ENorm) is theoretically 50 times faster than BN and 3 times faster than GN for a ResNet-18. In terms of memory, ENorm requires no extra-learnt parameters, but the number of parameters learnt by BN and GN is negligible (4800 for a ResNet-18 and 26,650 for a ResNet-50). We implemented ENorm using a tensor library; to take full advantage of the theoretical reduction in compute would require an optimized CUDA kernel....... DISPLAYFORM0 Rescaling the weights of two consecutive convolutional layers that preserves the function implemented by the CNN. Layer k scales channel number i of the input activations by γ i and layer k+1 cancels this scaling with the inverse scalar so that the activations after layer k+1 are unchanged. Block k DISPLAYFORM0 We analyze our approach by carrying out experiments on the MNIST and CIFAR-10 datasets and on the more challenging ImageNet dataset. ENorm will refer to Equi-normalization with p = 2. Training. We follow the setup of. Input data is normalized by subtracting the mean and dividing by standard deviation. The encoder has the structure FC-ReLU-FC-ReLU-FC-ReLU-FC and the decoder has the symmetric structure. We use He's initialization for the weights. We select the learning rate in {0.001, 0.01, 0.1} and decay it linearly to zero. We use a batch size of 256 and SGD with no momentum and a weight decay of 0.001. For path-SGD, our implementation closely follows the original paper BID8 and we set the weight decay to zero. For GN, we cross-validate the number of groups among {5, 10, 20, 50}. For WN, we use BN as well as a greedy layer-wise initialization as described in the original paper. Results. 
While ENorm alone obtains competitive results compared to BN and GN, ENorm + BN outperforms all other methods, including WN + BN. Note that here ENorm refers to ENorm using the adaptive c parameter as described in Subsection 3.6, whereas for ENorm + BN we use the uniform setup with c = 1. We perform a parameter study for different values and setups of the asymmetric scaling (uniform and adaptive) in Appendix E.2. Without BN, the adaptive setup outperforms all other setups, which may be due to the strong bottleneck structure of the network. With BN, the dynamics are different and the results are much less sensitive to the values of c. Results without any normalization and with Path-SGD are not displayed because of their poor performance. Training. We first experiment with a basic fully-connected architecture that takes as input the flattened image of size 3072. Input data is normalized by subtracting the mean and dividing by the standard deviation independently for each channel. The first linear layer is of size 3072 × 500. We then consider p layers of size 500 × 500, p being an architecture parameter for the sake of the analysis. The last classification layer is of size 500 × 10. The weights are initialized with He's scheme. We train for 60 epochs using SGD with no momentum, a batch size of 256 and a weight decay of 10^−3. Cross-validation is used to pick an initial learning rate in {0.0005, 0.001, 0.005, 0.01, 0.05, 0.1}. Path-SGD, GN and WN are learned as detailed in Section 6.1. All results are the average test accuracies over 5 training runs. Results. ENorm alone outperforms both BN and GN for any depth of the network. ENorm + BN outperforms all other methods, including WN + BN, by a good margin for more than p = 11 intermediate layers. Note that here both ENorm and ENorm + BN refer to ENorm in the uniform setup with c = 1.2. The results of the parameter study for different values and setups of the asymmetric scaling are similar to those of the MNIST setup; see Appendix E.2. Training. We use the CIFAR-NV architecture as described in prior work. Images are normalized by subtracting the mean and dividing by the standard deviation independently for each channel. During training, we use 28 × 28 random crops and randomly flip the image horizontally. At test time, we use 28 × 28 center crops. We split the train set into one training set (40,000 images) and one validation set (10,000 images). We train for 128 epochs using SGD and an initial learning rate cross-validated on a held-out set among {0.01, 0.05, 0.1}, along with a weight decay of 0.001. The learning rate is then decayed quadratically to 10^−4. We compare various batch sizes together with the use of momentum (0.9) or not. The weights are initialized with He's scheme. In order to stabilize the training, we employ a BatchNorm layer at the end of the network after the FC layer for the Baseline and ENorm cases. For GN we cross-validate the number of groups among {4, 8, 16, 32, 64}. All results are the average test accuracies over 5 training runs. Table 3: CIFAR-10 fully convolutional results (higher is better). Results. See Table 3. ENorm + BN outperforms all other methods, including WN + BN, by a good margin. Note that here ENorm refers to ENorm in the uniform setup with the parameter c = 1.2, whereas ENorm + BN refers to the uniform setup with c = 1. A parameter study for different values and setups of the asymmetric scaling can be found in Appendix E.2. This dataset contains 1.3M training images and 50,000 validation images split into 1000 classes.
We use the ResNet-18 model with type-C learnt skip connections as described in Section 4. Training. Our experimental setup closely follows that of GN. We train on 8 GPUs and compute the batch mean and standard deviation per GPU when evaluating BN. We use the Kaiming initialization for the weights (a) and the standard data augmentation scheme of BID19. We train our models for 90 epochs using SGD with a momentum of 0.9. We adopt the linear scaling rule for the learning rate and set the initial learning rate to 0.1B/256, where the batch size B is set to 32, 64, 128, or 256. As smaller batches lead to more iterations per epoch, we adopt a similar rule for the weight decay and use w = 10^−4 for B = 128 and 256, w = 10^−4.5 for B = 64, and w = 10^−5 for B = 32. We decay the learning rate quadratically to 10^−5 and report the median error rate over the final 5 epochs. When using GN, we set the number of groups G to 32 and did not cross-validate this value, as prior work reports little impact when varying G from 2 to 64. In order for the training to be stable and faster, we added a BatchNorm at the end of the network after the FC layer for the Baseline and ENorm cases, as described in Section 6.3. The batch mean and variance for this additional BN are shared across GPUs. Note that the activation size at this stage of the network is B × 1000, which is a negligible overhead (see TAB2). Results. We compare the Top-1 accuracy on a ResNet-18 when using no normalization scheme (Baseline) and when using BN, GN and ENorm (our method). In both the Baseline and ENorm settings, we add a BN at the end of the network as described above. The results are reported in Table 4. The performance of BN decreases with small batches, which concurs with prior observations. Our method outperforms GN and BN for batch sizes ranging from 32 to 128. GN presents stable results across batch sizes. Note that values of c different from 1 did not yield better results. The training curves for a batch size of 64 are presented in FIG2. While BN and GN are faster to converge than ENorm, our method achieves better results after convergence in this case. Note also that ENorm overfits the training set less than BN and GN, but more than the Baseline case. Table 4: ResNet-18 on the ImageNet dataset (test accuracy). We applied ENorm to a deeper model (ResNet-50), but obtained unsatisfactory results. We observed that learnt skip-connections, even when initialized to the identity, make it harder to train without BN, even with careful layer-wise initialization or learning rate warmup. This would require further investigation. We presented Equi-normalization, an iterative method that balances the energy of the weights of a network while preserving the function it implements. ENorm provably converges towards a unique equivalent network that minimizes the lp norm of its weights, and it can be applied to modern CNN architectures. Using ENorm during training adds a much smaller computational overhead than BN or GN and leads to competitive performance in the FC case as well as in the convolutional case. Discussion. While optimizing an unbalanced network is hard BID8, the criterion we optimize to derive ENorm is likely not optimal regarding convergence or training properties. These limitations suggest that further research is required in this direction. We first apply ENorm to one randomly initialized fully-connected network comprising 20 intermediary layers. All the layers have size 500 × 500 and are initialized following the Xavier scheme.
The network has been artificially unbalanced as follows: all the weights of layer 6 are multiplied by a factor 1.2 and all the weights of layer 12 are multiplied by 0.8, see FIG3. We then iterate our ENorm algorithm on the network, without training it, to see that it naturally re-balances the network, see FIG4. DISPLAYFORM0. We make the following assumptions: f is differentiable on D; X is compact; for each x ∈ D, each block coordinate function f: t → f (x 1, . . ., x −1, t, x +1, . . ., x B), where 2 ≤ ≤ B − 1, has at most one minimum. Under these assumptions, the sequence (x (r) ) r∈N generated by the coordinate descent algorithm is defined and bounded. Moreover, every cluster point of (x (r) ) r∈N is a local minimizer of f. STEP 1. We apply Theorem 2 to the function ϕ. This is possible because all the assumptions are verified as shown below. Recall that DISPLAYFORM1 DISPLAYFORM2 k, where C = max(C 0, 1) and DISPLAYFORM3 • For the first hidden layer, index k = 1. By assumption, every hidden neuron is connected at least to one neuron in the input layer. Thus, for every j, there exists i such that DISPLAYFORM4 Thus d 1 ∞ < CM.• For some hidden layer, index k. By assumption, every hidden neuron is connected at least to one neuron in the previous layer. Thus, for every j, there exists i such that DISPLAYFORM5 Using the induction hypothesis, we get DISPLAYFORM6 Thus, there exists a ball B such that δ / ∈ B implies ϕ(δ) > ϕ(δ 0). Thus, δ ∈ X implies that x ∈ B and X ⊂ B is bounded. Moreover, X is closed because ϕ is continuous thus X is compact and Assumption is satisfied. has a unique minimum as shown in Section 3.3, see Equation. The existence and uniqueness of the minimum comes from the fact that each hidden neuron is connected to at least one input and one output neuron, thus all the row and column norms of the hidden weight matrices W k are non-zero, as well as the column (resp. row) norms or W 1 (resp. W q). STEP 2. We prove that ϕ has at most one stationary point on D under the assumption that each hidden neuron is connected either to an input neuron or to an output neuron, which is weaker than the general assumption of Theorem 1.We first introduce some definitions. We denote the set of all neurons in the network by V. Each neuron ν ∈ V belongs to a layer k ∈ 0, q and has an index i ∈ 1, n k in this layer. Any edge e connects some neuron i at layer k − 1 to some neuron j at layer k, e = (k, i, j). We further denote by H the set of hidden neurons ν belonging to layers q ∈ 1, q − 1. We define E as the set of edges whose weights are non-zero, i.e. DISPLAYFORM0 For each neuron ν, we define prev(ν) as the neurons connected to ν that belong to the previous layer. We now show that ϕ has at most one stationary point on D. Directly computing the gradient of ϕ and solving for zeros happens to be painful or even intractable. Thus, we define a change of variables as follows. We define h as h: (0, +∞) DISPLAYFORM1 We next define the shift operator S: DISPLAYFORM2 and the padding operator P as DISPLAYFORM3 We define the extended shift operator S H = S • P. We are now ready to define our change of variables. We define χ = ψ • S H where DISPLAYFORM4 Since h is a C ∞ diffeomorphism, its differential [Dh](δ) is invertible for any δ. It follows that [Dϕ](δ) = 0 if, and only if, [Dχ](h(δ)) = 0. As χ is the composition of a strictly convex function, ψ, and a linear injective function, S H (proof after Step 3), it is strictly convex. Thus χ has at most one stationary point, which concludes this step. STEP 3. 
We prove that the sequence δ (r) converges. Step 1 implies that the sequence δ (r) is bounded and has at least one cluster point, as f is continuous on the compact X.Step 2 implies that the sequence δ (r) has at most one cluster point. We then use the fact that any bounded sequence with exactly one cluster point converges to conclude the proof. S IS INJECTIVE. Let x ∈ ker S H. Let us show by induction on the hidden layer index k that for every neuron ν at layer k, x ν = 0.• k = 1. Let ν be a neuron at layer 1. Then, there exists a path coming from an input neuron to ν 0 through edge e 1. By definition, P (x) ν0 = 0 and P (x) ν = x ν, hence S H (x) e1 = x ν − 0. Since S H (x) = 0 it follows that x ν = 0.• k → k + 1. Same reasoning using the fact that x ν k = 0 by the induction hypothesis. The case where the path goes from neuron ν to some output neuron is similar. We show by induction that for every layer k, i.e., DISPLAYFORM0 where y k (resp. y k) is the intermediary network function associated with weights W (resp. W). This holds for k = 0 since D 0 = I n0 by convention. If the property holds for some k < q − 1, then by we have DISPLAYFORM1 The same equations hold for k = q − 1 without the non-linearity σ. Using the chain rule and denoting by the loss of the network, for every layer k, using, we have DISPLAYFORM2 Similarly, we obtain DISPLAYFORM3 Equation FORMULA1 will be used to update the momentum (see Section 5) and Equation for the weights. Let us consider two consecutive convolutional layers k and k + 1, without bias. Layer k has C k filters of size C k−1 × S k × S k, where C k−1 is the number of input features and S k is the kernel size. This in a weight tensor T k of size C k × C k−1 × S k × S k. Similarly, layer k + 1 has a weight matrix T k+1 of size C k+1 × C k × S k+1 × S k+1. We then perform axis-permutation and reshaping operations to obtain the following 2D matrices: DISPLAYFORM0 For example, we first reshape T k as a 2D matrix by collapsing its last 3 dimensions, then transpose it to obtain M k. We then jointly rescale those 2D matrices using rescaling matrices D k ∈ D(k) as detailed in Section 3 and perform the inverse axis permutation and reshaping operations to obtain a right-rescaled weight tensor T k and a left-rescaled weight tensor T k+1. See Figure 2 for an illustration of the procedure. This matched rescaling does preserve the function implemented by the composition of the two layers, whether they are interleaved with a ReLU or not. It can be applied to any two consecutive convolutional layers with various stride and padding parameters. Note that when the kernel size is 1 in both layers, we recover the fully-connected case of Figure 1. We now consider an elementary block of a ResNet-18 architecture as depicted in FIG0. In order to maintain functional equivalence, we only consider ResNet architectures of type C as defined in (b), where all shortcuts are learned 1 × 1 convolutions. Structure of the rescaling process. Let us consider a ResNet block k. We first left-rescale the Conv1 and ConvSkip weights using the rescaling coefficients calculated between blocks k − 1 and k. We then rescale the two consecutive layers Conv1 and Conv2 with their own specific rescaling coefficients, and finally right-rescale the Conv2 and ConvSkip weights using the rescaling coefficients calculated between blocks k and k + 1.Computation of the rescaling coefficients. 
Two types of rescaling coefficients are involved, namely those between two convolution layers inside the same block and those between two blocks. The rescaling coefficients between the Conv1 and Conv2 layers are calculated as explained in Section 4.1. Then, in order to calculate the rescaling coefficients between two blocks, we compute equivalent block weights to deduce rescaling coefficients. We empirically explored some methods to compute the equivalent weight of a block using electrical network analogies. The most accurate method we found is to compute the equivalent weight of the Conv1 and Conv2 layers, i.e., to express the succession of two convolution layers as only one convolution layer denoted as ConvEquiv (series equivalent weight), and in turn to express the two remaining parallel layers ConvEquiv and ConvSkip again as a single convolution layer (parallel equivalent weight). It is not possible to obtain series of equivalent weights, in particular when the convolution layers are interleaved with ReLUs. Therefore, we approximate the equivalent weight as the parallel equivalent weight of the Conv1 and ConvSkip layers. In Section 3, we defined an iterative algorithm that minimizes the global p norm of the network DISPLAYFORM0 As detailed in Algorithm 2, we perform alternative SGD and ENorm steps during training. We now derive an implicit formulation of this algorithm that we call Implicit Equi-normalization. Let us fix p = 2. We denote by C(f θ (x), y) the cross-entropy loss for the training sample (x, y) and by 2 (θ, δ) the weight decay regularizer. The loss function of the network writes DISPLAYFORM1 where λ is a regularization parameter. We now consider both the weights and the rescaling coefficients as learnable parameters and we rely on automatic differentiation packages to compute the derivatives of L with respect to the weights and to the rescaling coefficients. We then simply train the network by performing iterative SGD steps and updating all the learnt parameters. Note that by design, the derivative of C with respect to any rescaling coefficient is zero. Although the additional overhead of implicit ENorm is theoretically negligible, we observed an increase of the training time of a ResNet-18 by roughly 30% using PyTorch 4.0 BID11. We refer to Implicit Equi-normalization as ENorm-Impl and to Explicit Equi-normalization as ENorm. We performed early experiments for the CIFAR10 fully-connected case. ENorm-Impl performs generally better than the baseline but does not outperform explicit ENorm, in particular when the network is deep. We follow the same experimental setup than previously, except that we additionally cross-validated λ. We also initialize all the rescaling coefficients to one.. Recall that ENorm or ENorm denotes explicit Equi-normalization while ENorm-Impl denotes Implicit Equinormalization. We did not investigate learning the weights and the rescaling coefficients at different speeds (e.g. with different learning rates or momentum). This may explain in part why ENorm-Impl generally underperforms ENorm in those early experiments. We perform sanity checks to verify our implementation and give additional . We apply our Equi-normalization algorithm to a ResNet architecture by integrating all the methods exposed in Section 4. We perform three sanity checks before proceeding to experiments. First, we randomly initialize a ResNet-18 and verify that it outputs the same probabilities before and after balancing. 
Second, we randomly initialize a ResNet-18 and perform successive ENorm cycles (without any training) and observe that the l2 norm of the weights in the network decreases and then converges, as theoretically proven in Section 3; see FIG5. We finally compare the evolution of the total l2 norm of the network when training it, with or without ENorm. We use the setup described in Subsection 6.2 with p = 3 intermediary layers. The results are presented in FIG6. ENorm consistently leads to a lower energy level in the network. MNIST auto-encoder. For the uniform setup, we test three different values of c without BN: c = 1, c = 0.8, and c = 1.2. We also test the adaptive setup. The adaptive setup outperforms all other choices, which may be due to the strong bottleneck structure of the network. With BN, the dynamics are different and the results are much less sensitive to the values of c (see FIG7). For the uniform setup, we again test three different values of c without BN: c = 1, c = 0.8, and c = 1.2. We also test the adaptive setup (see TAB8). Once again, the dynamics with or without BN are quite different. With or without BN, c = 1.2 performs best, which may be linked to the fact that the ReLUs cut energy during each forward pass. With BN, the results are less sensitive to the values of c.
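For reference, here is a minimal sketch of one ENorm cycle on a stack of fully-connected layers with biases (our own toy re-implementation, not the released code; following Section 3.5, biases are rescaled together with their layer's outgoing columns so the network function is unchanged, and repeated cycles exhibit the convergence of Theorem 1):

```python
import numpy as np

def enorm_cycle(weights, biases, p=2):
    """One ENorm cycle: balance each pair of consecutive layers, left to right, in place."""
    for k in range(len(weights) - 1):
        col = np.linalg.norm(weights[k], ord=p, axis=0)      # incoming p-norms
        row = np.linalg.norm(weights[k + 1], ord=p, axis=1)  # outgoing p-norms
        d = np.sqrt(row / col)       # per-neuron minimizer of the pairwise l_p objective
        weights[k] *= d              # rescale incoming columns of layer k
        biases[k] *= d               # matched bias rescaling preserves the function
        weights[k + 1] /= d[:, None] # inversely rescale outgoing rows of layer k+1
    return weights, biases

rng = np.random.default_rng(1)
sizes = [32, 64, 64, 64, 10]
ws = [rng.normal(size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
bs = [rng.normal(size=(n,)) for n in sizes[1:]]
for it in range(10):
    ws, bs = enorm_cycle(ws, bs)
    print(it, sum((w ** 2).sum() for w in ws))  # global l2 norm decreases, then plateaus
```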
Fast iterative algorithm to balance the energy of a network while staying in the same functional equivalence class
937
scitldr
Reinforcement learning agents are typically trained and evaluated according to their performance averaged over some distribution of environment settings. But does the distribution over environment settings contain important biases, and do these lead to agents that fail in certain cases despite high average-case performance? In this work, we consider worst-case analysis of agents over environment settings in order to detect whether there are directions in which agents may have failed to generalize. Specifically, we consider a 3D first-person task where agents must navigate procedurally generated mazes, and where reinforcement learning agents have recently achieved human-level average-case performance. By optimizing over the structure of mazes, we find that agents can suffer from catastrophic failures, failing to find the goal even on surprisingly simple mazes, despite their impressive average-case performance. Additionally, we find that these failures transfer between different agents and even significantly different architectures. We believe our findings highlight an important role for worst-case analysis in identifying whether there are directions in which agents have failed to generalize. Our hope is that the ability to automatically identify failures of generalization will facilitate the development of more general and robust agents. To this end, we report initial results on enriching training with settings causing failure. Reinforcement Learning (RL) methods have achieved great success over the past few years, achieving human-level performance on a range of tasks such as Atari BID17, Go BID20, Labyrinth, and Capture the Flag BID13. On these tasks, and more generally in reinforcement learning, agents are typically trained and evaluated using their average reward over environment settings as the measure of performance, i.e. E_{P(e)}[R(π(θ), e)], where π(θ) denotes a policy with parameters θ, R denotes the total reward the policy receives over the course of an episode, and e denotes environment settings such as maze structure in a navigation task, appearance of objects in the environment, or even the physical rules governing environment dynamics. But what biases does the distribution P(e) contain, and what biases, or failures to generalize, do these induce in the strategies agents learn? To help uncover biases in the training distribution and in the strategies that agents learn, we propose evaluating the worst-case performance of agents over environment settings, i.e.

    min_{e∈E} R(π(θ), e),

where E is some set of possible environment settings. Worst-case analysis can provide an important tool for understanding robustness and generalization in RL agents. For example, it can help us with: • Understanding biases in training: Catastrophic failures can help reveal situations that are rare enough during training that the agent does not learn a strategy general enough to cope with them. FIG0: Frames from top left to bottom right correspond to agent observations as it takes the path from spawn to goal.
Note that while the navigation task may look simple given a top down view, the agent only receives very partial information about the maze at every step, making navigation a difficult task.• Robustness For critical systems, one would want to eliminate, or at least greatly reduce, the probability of extreme failures.• Limiting exploitability If agents have learned strategies that fail to generalize to particular environment settings, then an adversary could try and exploit an agent by trying to engineer such environment settings leading to agent failure. In this work, we use worst-case analysis to investigate the performance of a state-of-the-art agent in solving a first-person 3D navigation task; a task on which agents have recently achieved average-case human-level performance BID23. By optimizing mazes to minimize the performance of agents, we discover the existence of mazes where agents repeatedly fail to find the goal (which we refer to as a Catastrophic Failure).Our Contributions To summarize, the key contributions of this paper are as follows:1. We introduce an effective and intuitive approach for finding simple environment settings leading to failure (Section 2).2. We show that state-of-the-art agents carrying out navigation tasks suffer from drastic and often surprising failure cases (Sections 3.1 and 3.2).3. We demonstrate that mazes leading to failure transfer across agents with different hyperparameters and, notably, even different architectures (Section 3.3).4. We present an initial investigation into how the training distribution can be adapted by incorporating adversarial and out-of-distribution examples (Section 4). Tasks We consider agents carrying out first-person 3D navigation tasks. Navigation is of central importance in RL research as it captures the challenges posed by partially observable Markov decision processes (POMDPs). The navigation tasks we use are implemented in DeepMind Lab (DM Lab) BID1. 1 Each episode is played on a 15 × 15 maze where each position in the maze may contain a wall, an agent spawn point, or a goal spawn point. The maze itself is procedurally generated every episode, along with the goal and agent spawn locations. The goal location remains fixed throughout an episode, while the agent spawn location can vary. In training, the agent respawns at different locations each time they reach the goal, while for our optimization and analysis we limit the agent to the same spawn location. Agents receive RGB observations of size 96 × 72 pixels, examples of which are provided in FIG0. Episodes last for 120 seconds and are played at a framerate of 15 frames per second. The agent receives a positive reward of 10 every time it reaches the goal, and 0 otherwise. On this specific navigation task, RL agents have recently achieved human-level average-case performance BID23.Agents We perform our analysis on Importance Weighted Actor-Learner Architecture agents trained to achieve human-level average-case performance on navigation tasks. These agents can be described as async batched-a2c agents with the V-trace algorithm for off policy-correction, and we henceforth refer to these as A2CV agents. Details of the training procedure are provided in Appendix A.1.Search Algorithm If we are interested in worst-case performance of agents, how can we find environment settings leading to the worst performance? In supervised learning, one typically uses gradient based methods to find inputs that lead to undesired output BID2 BID21 BID9. 
In contrast, we search for environment settings leading to an undesired outcome at the end of an episode. This presents a challenge as the environment rendering and MDP are not differentiable. We are therefore limited to black-box methods where we can only query agent performance given environment settings. To search for environment settings which cause catastrophic failures, we propose the local search procedure described in Algorithm 1 (the process is visualized in FIG1). Concretely, we generate a set of initial candidate mazes by sampling mazes from the training distribution. We then use the Modify function on the maze which yielded the lowest agent score to randomly move two walls to produce a new set of candidates, rejecting wall moves that lead to unsolvable mazes. Importantly, this method is able to effectively find catastrophic failure cases (as we demonstrate in Section 3.1), while also having the advantage of being intuitive to understand and implement; a minimal sketch follows below.

[FIG1 caption: Example of the search procedure. First, we generate a set of 10 initial candidate mazes by sampling from the training distribution. We then Evaluate each with the agent over 30 episodes, select the best maze (i.e. lowest agent score), and Modify this maze by randomly moving two walls to form the next set of candidates (Iteration 1). This process is repeated for 20 iterations, leading to a maze where the agent score is 0.09 in this example (i.e. the agent finds the goal once in 11 episodes).]

In Appendix A.2.1 we detail the computational requirements of this search procedure.

The agents we study achieve impressive average-case performance, but how much does their worst-case performance differ from their average-case performance? To investigate this, we consider the worst-case performance over a large set of mazes, including mazes that are not possible under the training distribution. A natural question to ask is whether any departure from the wall structure present during training will lead to agent failure. To test this, we evaluate the agent on samples from a distribution of mazes containing all mazes agents could be evaluated on during the search. In particular, we randomly select agent and goal spawn locations in the first step and then randomly move 40 walls, corresponding to the same actions taken by our optimization procedure, but where the actions are chosen randomly rather than to minimize agent performance. We find that agents do generalize to random mazes from the set we consider. In fact, we find that the average score obtained by agents on randomly perturbed mazes is slightly higher than on the training distribution, with agents obtaining an average of 45 goal reaches per two-minute episode. The increased performance is likely due to the agent spawn location being fixed, making it easier for the agent to return to the goal once found.

The considered agents generalize in the sense that agent performance is not reduced on average by out-of-distribution wall structure. But what about the worst case over all wall structures? Have the agents learned a general navigation strategy that works for all solvable mazes? Or do there exist environment settings that lead to catastrophic failures with high probability? In this section, we investigate these questions. We define a catastrophic failure to be an agent failing to find the goal in a two-minute episode (1800 steps).
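For reference, here is a minimal sketch of the search loop described above. The callbacks `agent_score`, `sample_maze`, and `modify` are hypothetical stand-ins for the paper's evaluation and maze-editing infrastructure; the same loop covers the wall-removal variant of Modify used for simplification in Section 3.2.

```python
def search_for_failure(agent_score, sample_maze, modify,
                       n_candidates=10, iterations=20, evals=30):
    """Local search over mazes minimizing agent score (sketch of Algorithm 1).

    agent_score(maze) -> goals reached in one episode; sample_maze() draws a
    maze from the training distribution; modify(maze) applies the perturbation
    (e.g. randomly moving two walls, rejecting unsolvable results). All three
    are assumed callbacks, not part of any published API.
    """
    def avg_score(maze):
        # Evaluate each candidate over several episodes to reduce noise.
        return sum(agent_score(maze) for _ in range(evals)) / evals

    candidates = [sample_maze() for _ in range(n_candidates)]
    for _ in range(iterations):
        worst = min(candidates, key=avg_score)  # "best" candidate = lowest agent score
        # Next generation: perturbations of the incumbent, plus the incumbent itself.
        candidates = [modify(worst) for _ in range(n_candidates - 1)] + [worst]
    return min(candidates, key=avg_score)
```

Since the method is purely black-box, nothing here depends on the agent's internals; swapping in a different Modify function (such as single-wall removal) changes only the search space.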
As detailed below, we find that not only do there exist mazes leading to catastrophic failure, there exist surprisingly simple mazes that lead to catastrophic failure for agents yet are consistently and often rapidly solved by humans.

Do environment settings leading to catastrophic failure exist for the agents we are considering? By searching over mazes using the procedure outlined in Algorithm 1, we find mazes where agents fail to find the goal on many episodes, only finding the goal 10% of the time. In fact, some individual mazes lead to failure across five different agents we tested, with even the best-performing agent only finding the goal in 20% of the episodes.

[Figure 3 caption: The search algorithm is able to rapidly find mazes where agents fail to find the goal. (a) Average number of goals reached per episode over the course of the optimization; the objective used for the optimizer is average agent score, and the dashed line corresponds to average goals reached on randomly perturbed mazes. (b) Probability of the agent reaching the goal in an episode; minimizing score also leads to a low probability of at least one goal retrieval in an episode, and the dashed line corresponds to the average probability of reaching a goal on randomly perturbed mazes. The blue lines are computed by averaging across 50 optimization runs.]

Optimization curves for our search procedure are given in Figure 3. Note that while we define catastrophic failure as failure to find the goal, the actual objective used for the optimization was the average number of goal reaches during an episode. Using the average number of goals gives a stronger signal at the start of the optimization process: finding mazes leading to a lower average number of captures is easier than finding mazes where the agent rarely finds the goal even once. As can be seen, despite finding the goal on average 45 times per episode on randomly perturbed mazes, on mazes optimized to reduce score, agents find the goal on average only 0.33 times per episode, more than a 100× decrease in performance. In terms of probability of catastrophic failure, we note that despite agents finding the goal in approximately 98% of episodes on randomly perturbed mazes, using our method, on average we find mazes where agents only find the goal in 30% of the episodes.

Example trajectories agents take during failures are visualized in FIG3. The trajectories often seem to demonstrate a failure to use memory to efficiently explore the maze, with the agent repeatedly visiting the same locations multiple times. The mazes presented in FIG3 appear to be of higher complexity than mazes seen during training. This suggests that to obtain agents that truly master navigation, more complex mazes should be included in the training distribution. However, we can ask whether it is only more complex mazes that lead to catastrophic failure or whether there are also simple mazes leading to catastrophic failure. This is a question we explore in the next subsection.

While the existence of catastrophic failures may be intriguing and perhaps troubling, one might suspect that the failures are caused by the increased complexity of the mazes leading to failure relative to the mazes the agent is exposed to during training; e.g., the mazes leading to failure contain more dead ends and sometimes have lower visibility. Further, understanding the cause of failure in such mazes seems challenging due to the large number of wall structures that may be causing the agent to fail.
In this section, we explore whether there exist simple mazes which lead to catastrophic failures. As our measure of complexity, we use the total number of walls in the maze. We also evaluate humans on such mazes to get a quantitative measure of maze complexity. To find simple mazes which lead to failure, we first follow the same procedure as in the previous section, producing a set of mazes which all lead to catastrophic failures (i.e., low agent scores). Next, we use this set of mazes as the initial set of candidates in our search algorithm; however, we now use a Modify function that removes a single randomly chosen wall each iteration. This process is repeated for 70 iterations, searching for a maze with few walls while maintaining a low agent score.

In FIG3, we present the resulting simple mazes and the corresponding agent trajectories from our optimization procedure. Interestingly, we find that one can remove a majority of the walls in a maze and still maintain the catastrophic failure (i.e., very low agent score). Of note is that a number of these mazes are strikingly simple, suggesting that there exists structure in the environment that the agent has not generalized to. For example, we can see that placing the goal in a small room in an otherwise open maze can significantly reduce the agent's ability to find the goal.

Human baselines. While these simple maps may lead to catastrophic failure, it is unclear whether this is because of the agent or whether the maze is difficult in a way that is not obvious. To investigate this, we perform human experiments by tasking humans to play on a set of 10 simplified mazes. Notably, we find that human players are able to always locate the goal in every maze, typically within one third of the full episode length. This demonstrates that the mazes are comfortably solvable within the course of an episode by players with a general navigation strategy. We provide a detailed comparison of agent and human performance in Appendix A.3.

Analysis. One question that may arise is the extent to which these mazes are isolated points in the space of mazes. That is, if the maze was changed slightly, would it no longer lead to catastrophic failure? To test this, we investigate how sensitive our discovered failure mazes are with respect to the agent and goal spawn locations on simplified adversarial mazes. As can be seen in FIG5, we find that for a large range of spawn locations, the mazes still lead to failure. This indicates that there is specific local maze structure which causes agents to fail. Procedures for finding such simple mazes may prove useful as a tool for debugging agents and understanding the ways in which training has led them to develop narrow strategies that are good enough for achieving high average-case performance.

We have found failure cases for individual agents, but to what extent do these failure cases highlight a specific peculiarity of the individual agent versus a more general failure of a certain class of agents, or even a shortcoming of the distribution used for training? In this section, we investigate whether mazes which cause one trained agent to fail also cause other agents to fail. We consider two types of transfer: between different hyperparameters of the same model architecture, and between different model architectures. To test transfer between agents of the same architecture, we train a set of five A2CV agents with different entropy costs and learning rates.
To test transfer between agents with significantly different architectures, we train a set of five MERLIN-based agents BID23. These agents have a number of differences from the A2CV agents; most notably, they contain a sophisticated memory structure based on a DNC (but with a fixed write location per timestep) BID10. Both agent types are trained on the same distribution and achieve human-level average scores on the navigation task (with MERLIN scoring 10% higher than A2CV on average). Further details of agent training can be found in Appendix A.1.

To quantify the level of transfer between (sets of) agents, we follow the procedure for finding adversarial mazes outlined in Section 3.1 to produce a collection of 50 unique failure mazes for each agent (i.e., 10 collections of 50 mazes each). We then evaluate every agent 100 times on each maze in each collection, reporting their average performance on each collection. Complete quantitative transfer results can be found in Appendix A.4.

Failure cases transfer somewhat across all agents. First, we find that across all agents, some level of transfer exists. In particular, as can be seen in Figure 6, the probability of one agent finding the goal on mazes generated to reduce the score of another agent is significantly below 1. This suggests a common cause of failure that is some combination of the distribution of environment settings used during training and the set of methods that are currently used to train such agents. A possible way to address this could be enriching the training distribution so that it contains fewer biases and encourages more general solutions.

Transfer within agent type is stronger than between agent types. Comparing the performance of each agent type on mazes from the same agent type to mazes from another agent type, we see that transfer within agent type is stronger. As shown in Figure 6b, performance increases as we go from 'MERLIN to MERLIN' to 'A2CV to MERLIN' (0.42 to 0.58) and also if we go from 'A2CV to A2CV' to 'MERLIN to A2CV' (0.63 to 0.70). This suggests that there are some common biases in agents that are due to their architecture type. Analyzing structural differences between mazes that lead one agent type to fail but not another could give interesting insight into behavioural differences between agents beyond just average performance.

A2CV agents are less susceptible to transfer. Despite similar probabilities of failure when evaluating on mazes optimized for the same agent, A2CV agents seem to suffer less on mazes optimized using other A2CV or MERLIN agents. This indicates that A2CV agents may have learned a more diverse set of strategies.

[Figure 6 caption: Mazes that lead to failures in one agent lead to failure in other agents as well. This is the case for agents of the same architecture with different hyperparameters, and is also the case for transfer across agents of different architectures. We note, however, that transfer across agents with different architectures is weaker than among agents with the same architecture, and that the performance of agents with the same architecture but with different hyperparameters is slightly higher than for the agents used to originally find the mazes.]

From our experiments so far, we have discovered that there exist many mazes which lead to catastrophic failure. In this section, we investigate whether agent performance can be improved by adapting the training distribution, for example by incorporating adversarial mazes into training and modifying the original mazes used in training.
To better understand what may be causing catastrophic failures, with the aim of fixing them, we compare the set of adversarial mazes with the original set of mazes used in training. From this comparison, we find that there are two notable differences.

The probability of a catastrophic failure correlates with the distance between the spawn locations and how hidden the goal is. First, we find that a number of features are more common in adversarial mazes than non-adversarial mazes. In particular, adversarial mazes are more likely to have the goal hidden in an enclosed space (such as a small room), and on average the path length from the player's starting location to the goal is significantly longer (31.1 ± 8.4 compared to 11.6 ± 6.3). Notably, while the training distribution also contains hidden goals which are far from the agent's starting location, they are much rarer.

Adversarial mazes are typically far from the training distribution. Second, we find that adversarial mazes tend to be not only out-of-distribution, but also far from the training distribution, due to the Modify function used in our adversarial search procedure (for example, see FIG1). This contrasts with the adversarial images literature, where attacks are usually constrained to be small or imperceptible. It may therefore not be surprising that the agent is unable to generalize to all out-of-distribution mazes, which could also explain the significant reduction in performance.

Given these two observations, it is natural to ask whether the training distribution can be adapted to improve the agent's performance. In the following sections we investigate this question and discuss our findings, focusing on incorporating adversarial mazes into training and modifying the original mazes used in training. We consider two distinct approaches for incorporating adversarial and out-of-distribution mazes into the training distribution.

Adversarial training. To add adversarial mazes into the training distribution, we first create a dataset of 6000 unique adversarial mazes from separate runs of our search procedure using the previously trained A2CV agents. Notably, this set also includes the 250 mazes used in our transfer experiments (Section 3.3). Next, we train a new set of A2CV agents using both this adversarial set of mazes and the standard distribution of mazes, sampling randomly every episode (i.e., 50% of training episodes are on an adversarial maze).

Randomly perturbed training. To ensure our adversarial search procedure produces in-distribution adversarial mazes, we alter the default maze generator used in training so that any adversarial maze can be generated. We accomplish this by randomly perturbing the original mazes, repeatedly applying the same Modify function used by our adversarial search procedure, but selecting candidates randomly rather than by worst agent performance.

In this section, we report our findings on the robustness of agents trained using the approaches above.

Catastrophic failures still exist. Our main finding is that while agents learn to perform well on the richer distributions of mazes described above, this does not lead to robust agents. In particular, agents trained on a distribution of mazes enriched with 6000 adversarial mazes were able to find the goal on average 89.8% of the time on the adversarial mazes they were trained on. Similarly, agents trained on randomly perturbed mazes were able to find the goal close to 100% of the time on the distribution they were trained on.
However, despite the agents being trained on these richer training distributions, the same search method is still able to find mazes leading to extreme failure, as can be seen in FIG8. One possible explanation for this is that the 6000 adversarial mazes used for training were insufficient to get good coverage of the space of mazes, and that further enlarging this set could yield qualitatively different results. Indeed, for agents trained using randomly perturbed mazes, the search procedure took 50 iterations to obtain the same level of failure as it did after 20 iterations when applied to agents trained on the standard training distribution. This suggests that perhaps enriching the training distribution with a very large set of adversarial mazes may lead to more general and robust agents. However, there are a number of challenges that need to be addressed before this approach can be tested, which we describe in the next section.

Our results suggest that if a richer training distribution is to yield more robust agents, we may need to use a very large set of environment settings leading to failure. This is similar to how adversarial training in supervised learning is performed, where more adversarial examples are used than the original training examples. We describe below what we see as two significant challenges that need to be overcome before such an approach can be thoroughly evaluated in the RL setting.

Expensive generation. The cost of generating a single adversarial setting is on the order of 1000s of episodes using the method in this work. This implies that generating a set of adversarial settings similar in size to the set trained on would require orders of magnitude more computation than training itself. This could be addressed with faster methods for generating adversarial settings.

Expensive training. Since agents receive very little reward in adversarial settings, the training signal is incredibly sparse. Therefore, it is possible that many more training iterations are necessary for agents to learn to perform well in each adversarial setting. A possible solution to this challenge is to design a curriculum over adversity, whereby easier variants of the adversarial settings are injected into the training distribution. For example, for the navigation tasks considered here, one could include training settings with challenging mazes where the goal is in any position on the shortest path between the starting location of the agent and the challenging goal.

We hope that these challenges can be overcome so that, in the context of RL, the utility of adversarial retraining can be established, an approach which has proved useful in supervised learning tasks. However, since significant challenges remain, we suspect that much effort and many pieces of work will be required before a conclusive answer is achieved.

Navigation. Recently, there has been significant focus in the RL community on agent navigation in simulated 3D environments, including a community-wide challenge for agents in such environments called VizDoom BID14. Such 3D first-person navigation tasks are particularly interesting because they capture challenges such as partial observability, and require the agent to "effectively perceive, interpret, and learn the 3D world in order to make tactical and strategic decisions where to go and how to act" BID14.
Recent advances have led to impressive human-level performance on navigation tasks in large procedurally generated environments BID1 BID23.

Adversarial examples in supervised learning. Our work can be seen as an RL navigation analogue of work on adversarial attacks on supervised learning systems for image classification BID21. For adversarial attacks on image classifiers, one considers a set of inputs that is larger than the original distribution, but where one would hope that systems perform just as well on L∞ balls around inputs from the distribution. In particular, the adversarial examples lie outside the training distribution. Analogously, we consider a set of mazes which is larger than the original set of mazes used during training, but where we would hope our system will work just as well. Notably, while similar on a conceptual level, our setting has two key differences from this previous line of work: (1) the attack vector consists of changing latent semantic features of the environment (i.e., the wall structure of a maze), rather than changing individual pixels in an input image in an unconstrained manner; (2) the failure is realized over multiple steps of agent and environment interacting with each other, rather than simply being errant output from a single forward pass through a neural net. More recently, in the context of supervised learning for image classification, there has been work to find constrained adversarial attacks, which is closer to what we consider in this work BID0 BID8 BID7 BID19. In the context of interpretable adversarial examples in image classification, approaches similar to our simplification approach have been explored, where one searches for adversarial perturbations with group-sparse structure or other minimal structure BID24 BID3. Additionally, our findings regarding transfer are consistent with findings on adversarial examples for computer vision networks, where it has been found that perturbations that are adversarial for one network often transfer across other networks BID21 BID22.

Input attacks on RL systems. There have been a number of previous works which have extended adversarial attacks to RL settings; however, they have achieved this by manipulating inputs directly, which effectively amounts to changing the environment renderer BID12 BID15. While these are interesting from a security perspective, it is less clear what they tell us about the generality of the strategy learned by the agent.

Generalization in RL systems. Recently, it has been shown that simple agents trained on restricted datasets fail to learn sufficiently general navigation strategies to improve goal retrieval times on held-out mazes BID4. In comparison, our method is both automatic and able to find more spectacular failures. Further, our findings highlight failures in exploration during navigation, in contrast to this previous work, which studied failures to exploit knowledge from previous goal retrievals in the same episode. In the context of testing generalization in RL, previous work has looked at statistical generalization. Here we consider agents that already generalize in the statistical sense and try to better characterize the ways in which they generalize beyond the average case.

In this work, we have shown that despite the strong average-case performance often reported for RL agents, worst-case analysis can uncover environment settings which agents have failed to generalize to.
Notably, we have found that not only do catastrophic failures exist, but also that simple catastrophic failures exist which we would hope agents would have generalized to, and that failures also transfer between agents and architectures. As agents are trained to perform increasingly complicated tasks in more sophisticated environments, for example AirSim BID18 and CARLA BID5, it is of interest to understand their worst-case performance and modes of generalization. Further, in real-world applications such as self-driving cars, industrial control, and robotics, searching over environment settings to investigate and address such behaviours is likely to be critical on the path to robust and generalizable agents. To conclude, while this work has focused mostly on evaluation and understanding, it is only a first step towards the true goal of building more robust, general agents. The initial results we report indicate that enriching the training distribution with settings leading to failure may need to be done at a large scale if it is to work, which introduces significant challenges. While training robust agents is likely an endeavour requiring significant effort, we believe it is important if agents are to carry out critical tasks, and on the path to finding more generally intelligent agents.

In this section we describe how the agents used in this work were trained.

A.1.1 A2CV

The A2CV agents in this paper are trained as in the original work, but with a few modifications. We note that the performance of the agents presented here is higher than that previously published. The differences in our training procedure were as follows:

• We train the agents for 10 billion steps, as opposed to 333 million steps in the original work.
• We use a simplified action set as in BID11.
• We clip rewards to [−1, 1].

The main cause of higher performance seems to be the approximately 30× increase in the number of training steps. Indeed, at 333 million steps, the agents trained here obtain a similar score to the originally reported agents. After training, the agents all achieved average rewards between 310 and 320, corresponding to finding the goal on average between 31 and 32 times per episode.

A.1.2 MERLIN

The agent model was a simplified variant of the model presented in BID23, originally built to reduce training time in multi-task training scenarios. Specifically, the stochastic latent variable model was removed. This involved removing the prior network and directly producing a deterministic state representation using the same multi-layer perceptron as the posterior network in BID23; however, instead of producing a Gaussian distribution and sampling, the state representation was a deterministic transformation z_t = f(e_t, h_{t−1}, m_{t−1}) as a function of the recurrent controller state and the read vectors retrieved at the previous time step from the external memory system. Additionally, the policy network was a purely feedforward multi-layer perceptron with one hidden layer of 200 units and a tanh nonlinearity computing the multinomial action distribution, also conditioned on the state representation z_t, recurrent state h_t, and memory reads m_t at the current time step: π(a_t | z_t, h_t, m_t). The policy loss was the same as for the A2CV agent. After training, the agents all achieved average rewards between 340 and 360, corresponding to finding the goal on average between 34 and 36 times per episode.

A.2.1 COMPUTATIONAL REQUIREMENTS

As described in FIG1, our search algorithm is run using 10 candidate mazes per iteration, each evaluated 30 times, across 20 iterations.
This is a total of 6000 episodes for the entire search procedure, and all episodes within one iteration can be evaluated in parallel (i.e., 20 batches of 300 episodes). In our experiments with 30 evaluations per maze, the entire search procedure took 30 minutes to complete, and only 9 minutes on average to find an adversarial maze where the probability of the agent finding the goal was below 50%. We also found that reducing the number of evaluations per maze from 30 to 10 produced similar results and led to a 3× reduction in resources. Our search procedure took around 30 minutes using 200 parallel workers, each requiring approximately 2 CPUs. In contrast, agents were trained using 150 parallel workers, each also requiring approximately 2 CPUs, and training took 4 days.

In Figure 3 (Section 3.1), we report the average performance of 50 independent optimization runs (i.e., 50 different initializations of our search algorithm). In 44/50 (88%) of these runs, our search algorithm was able to find at least one adversarial maze where the agent's probability of finding the goal was below 50% (compared to 98% on the average maze). Furthermore, the 25th, 50th, and 75th percentiles were as follows:

• p(reaching the goal): 0.031, 0.136, 0.279
• number of goals reached: 0.042, 0.136, 0.368

To upper bound the intrinsic difficulty of the mazes found to be adversarial to agents, we conducted experiments where three humans played on the same mazes. Each human played a single episode on each of ten mazes. The humans played at the same resolution as agents, 96 × 72 pixels, to rule out visual acuity as a confounding factor. On all mazes, all humans successfully found the goal in the course of the episode; in fact, in most episodes, humans were able to find the goal in less than a third of the episode. By contrast, for each maze the best-performing agent found the goal less than 50% of the time.

In this section we provide detailed results for our transfer experiments. In particular, we detail transfer between all pairs among the 10 agents: five A2CV agents and five MERLIN agents trained with different entropy costs and learning rates.
We find environment settings in which SOTA agents trained on navigation tasks display extreme failures, suggesting failures in generalization.
In some misspecified settings, the posterior distribution in Bayesian statistics may lead to inconsistent estimates. To fix this issue, it has been suggested to replace the likelihood by a pseudo-likelihood, that is, the exponential of a loss function enjoying suitable robustness properties. In this paper, we build a pseudo-likelihood based on the Maximum Mean Discrepancy, defined via an embedding of probability distributions into a reproducing kernel Hilbert space. We show that this MMD-Bayes posterior is consistent and robust to model misspecification. As the posterior obtained in this way might be intractable, we also prove that reasonable variational approximations of this posterior enjoy the same properties. We provide details on a stochastic gradient algorithm to compute these variational approximations. Numerical simulations indeed suggest that our estimator is more robust to misspecification than the ones based on the likelihood.

Bayesian methods are very popular in statistics and machine learning as they provide a natural way to model uncertainty. Some subjective prior distribution $\pi$ is updated using the negative log-likelihood $\ell_n$ via Bayes' rule to give the posterior $\pi_n(\theta) \propto \pi(\theta)\exp(-\ell_n(\theta))$. Nevertheless, the classical Bayesian methodology is not robust to model misspecification. There are many cases where the posterior is not consistent (see, e.g., Grünwald and van Ommen, 2017), and there is a need to develop methodologies yielding robust estimates. A way to fix this problem is to replace the log-likelihood $\ell_n$ by a relevant risk measure. This idea is at the core of the PAC-Bayes theory and of Gibbs posteriors; its connections with Bayesian principles have been discussed in the literature. A general representation of Bayesian inference in this spirit has also been built and extended to the approximate inference case. In particular, the use of a robust divergence has been shown to provide an estimator that is robust to misspecification. For instance, prior works investigated the case of Hellinger-based divergences or used robust β- and γ-divergences, while others, such as Baraud and Birgé, replaced the logarithm of the likelihood by wisely chosen bounded functions. We refer the reader to the literature for a complete survey on robust divergence-based Bayes inference.

In this paper, we consider the Maximum Mean Discrepancy (MMD) as the alternative loss used in Bayes' formula, leading to a pseudo-posterior that we shall call MMD-Bayes in the following. MMD is built upon an embedding of distributions into a reproducing kernel Hilbert space (RKHS) that generalizes the original feature map to probability measures, and allows to apply tools from kernel methods in parametric estimation. Our MMD-Bayes posterior is related to previously studied kernel-based posteriors, even though it is different. More recently, a frequentist minimum distance estimator based on the MMD has been introduced and shown to be consistent and robust to small deviations from the model. We show that our MMD-Bayes retains the same properties, i.e., is consistent at the minimax optimal rate of convergence like the minimum MMD estimator, and is also robust to misspecification, including data contamination and outliers. Moreover, we show that these guarantees are still valid when considering a tractable approximation of the MMD-Bayes via variational inference, and we support our theoretical results with experiments showing that our approximation is robust to outliers for various estimation problems. All the proofs are deferred to the appendix.
Let us introduce the notation and theoretical tools required to understand the rest of the paper. We consider a measurable space $(\mathbb{X}, \mathcal{X})$ and a collection of $n$ independent and identically distributed (i.i.d.) random variables $X_1, \dots, X_n \sim P_0$, where $P_0$ is the generating distribution. We index a statistical model $\{P_\theta \,/\, \theta \in \Theta\}$ by a parameter space $\Theta$, without necessarily assuming that the true distribution $P_0$ belongs to the model. Let us consider some integrally strictly positive definite kernel $k$ bounded by a positive constant, say 1. We then denote by $(\mathcal{H}_k, \langle\cdot,\cdot\rangle_{\mathcal{H}_k})$ the associated RKHS, satisfying the reproducing property $f(x) = \langle f, k(x,\cdot)\rangle_{\mathcal{H}_k}$ for any $f \in \mathcal{H}_k$ and any $x \in \mathbb{X}$. We define the notion of kernel mean embedding, a Hilbert space embedding that maps probability distributions into the RKHS $\mathcal{H}_k$. Given a distribution $P$, the kernel mean embedding $\mu_P \in \mathcal{H}_k$ is

$$\mu_P = \mathbb{E}_{X \sim P}\big[k(X, \cdot)\big].$$

Then we define the MMD between two probability distributions $P$ and $Q$ simply as the distance in $\mathcal{H}_k$ between their kernel mean embeddings:

$$D_k(P, Q) = \big\| \mu_P - \mu_Q \big\|_{\mathcal{H}_k}.$$

Under the assumptions we made on the kernel, the kernel mean embedding is injective and the maximum mean discrepancy is a metric. In this paper, we adopt a Bayesian approach. We introduce a prior distribution $\pi$ over the parameter space $\Theta$ equipped with some sigma-algebra. Then we define our pseudo-Bayesian distribution $\pi_n^\beta$ given a prior $\pi$ on $\Theta$:

$$\pi_n^\beta(\theta) \propto \pi(\theta)\,\exp\big(-\beta\, D_k(P_\theta, \hat{P}_n)\big),$$

where $\hat{P}_n = \frac{1}{n}\sum_{i=1}^n \delta_{X_i}$ is the empirical measure and $\beta > 0$ is a temperature parameter.

In this section, we show that the MMD-Bayes is consistent when the true distribution belongs to the model, and is robust to misspecification. To obtain the concentration of posterior distributions in models that contain the generating distribution, earlier work introduced the so-called prior mass condition, which requires the prior to put enough mass on some neighborhood (in Kullback-Leibler divergence) of the true distribution. This condition has been widely studied since then for more general pseudo-posterior distributions. Unfortunately, this prior mass condition is (by definition) restricted to cases when the model is well-specified, or at least when the true distribution is in a very close neighborhood of the model. We formulate here a robust version of the prior mass condition which is based on a neighborhood of an approximation $\theta^*$ of the true parameter instead of the true parameter itself. The following condition is suited to the MMD metric, recovers the usual prior mass condition when the model is well-specified, and still makes sense in misspecified cases with potentially large deviations from the model assumptions:

Prior mass condition: Let us denote $\theta^* = \arg\min_{\theta \in \Theta} D_k(P_\theta, P_0)$ and its neighborhood $B_n = \{\theta \in \Theta : D_k(P_\theta, P_{\theta^*}) \leq n^{-1/2}\}$. Then $(\pi, \beta)$ is said to satisfy the prior mass condition $\mathcal{C}(\pi, \beta)$ when $\pi(B_n) \geq e^{-\beta/n}$.

In the usual Bayesian setting, the computation of the prior mass is a major difficulty, and it can be hard to know whether the prior mass condition is satisfied or not. Nevertheless, here the condition bears not only on the prior distribution $\pi$ but also on the temperature parameter $\beta$. Hence, it is always possible to choose $\beta$ large enough so that the prior mass condition is satisfied. We refer the reader to Appendix E for an example of computation of such a prior mass and of valid values of $\beta$. The following theorem, expressed as a generalization bound, shows that the MMD-Bayes posterior distribution is robust to misspecification under the robust prior mass condition.
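As a concrete reference point before the theory, the following sketch estimates the squared MMD between two samples with a Gaussian kernel using a U-statistic, the kind of estimate used in the experiments later in the paper. The function names and the specific choice of a Gaussian kernel are illustrative, not prescribed by the text.

```python
import numpy as np

def gaussian_kernel(x, y, gamma):
    """k(x, y) = exp(-||x - y||^2 / gamma^2); bounded by 1 as required."""
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / gamma ** 2)

def mmd2_u(x, y, gamma):
    """U-statistic estimate of D_k(P, Q)^2 from samples x ~ P and y ~ Q."""
    kxx = gaussian_kernel(x, x, gamma)
    kyy = gaussian_kernel(y, y, gamma)
    kxy = gaussian_kernel(x, y, gamma)
    n, m = len(x), len(y)
    # Drop diagonal terms so the within-sample averages are unbiased.
    term_xx = (kxx.sum() - np.trace(kxx)) / (n * (n - 1))
    term_yy = (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
    return term_xx + term_yy - 2.0 * kxy.mean()
```

Taking the square root of a clipped-at-zero `mmd2_u` gives a plug-in estimate of $D_k(P, Q)$ itself.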
Note that the rate $n^{-1/2}$ is exactly the one obtained by the frequentist minimum MMD estimator and is minimax optimal:

Theorem 1. Under the prior mass condition $\mathcal{C}(\pi, \beta)$:

$$\mathbb{E}\left[\int_\Theta D_k(P_\theta, P_0)\, \pi_n^\beta(\mathrm{d}\theta)\right] \leq D_k(P_{\theta^*}, P_0) + \mathcal{O}\big(n^{-1/2}\big). \tag{3.1}$$

The second theorem investigates the concentration of the MMD-Bayes posterior in the well-specified case. It shows that the prior mass condition $\mathcal{C}(\pi, \beta)$ ensures that the MMD-Bayes concentrates around $P_0$ at the minimax rate $n^{-1/2}$:

Theorem 2. Let us consider a well-specified model. Then under the prior mass condition $\mathcal{C}(\pi, \beta)$, we have in probability, for any $M_n \to +\infty$:

$$\pi_n^\beta\big(\big\{\theta \in \Theta : D_k(P_\theta, P_0) > M_n \cdot n^{-1/2}\big\}\big) \longrightarrow 0.$$

Note that we obtain concentration to the true distribution $P_0 = P_{\theta^*}$ at the minimax rate $n^{-1/2}$ for well-specified models. Unfortunately, the MMD-Bayes is not tractable in complex models. In this section, we provide an efficient implementation of the MMD-Bayes based on VI retaining the same theoretical properties. Given a variational set of tractable distributions $\mathcal{F}$, we define the variational approximation of $\pi_n^\beta$ as the closest approximation (in KL divergence) to the target MMD posterior:

$$\hat{\pi}_n^\beta = \arg\min_{\rho \in \mathcal{F}} \mathrm{KL}\big(\rho \,\|\, \pi_n^\beta\big).$$

Under similar conditions to those in Theorems 1 and 2, $\hat{\pi}_n^\beta$ is guaranteed to be $n^{-1/2}$-consistent like the MMD-Bayes. Most works ensuring the consistency or the concentration of variational approximations of posterior distributions use the extended prior mass condition, an extension of the prior mass condition that applies to the variational approximations rather than to the distributions they approximate. Here, we extend our previous prior mass condition both to variational approximations and to misspecification. In addition to the prior mass condition, the variational set $\mathcal{F}$ must contain probability distributions that are concentrated around the best approximation $P_{\theta^*}$. This robust extended prior mass condition can be formulated as follows:

Assumption (4.1): We assume that there exists a distribution $\rho_n \in \mathcal{F}$ such that:

$$\int_\Theta D_k(P_\theta, P_{\theta^*})\, \rho_n(\mathrm{d}\theta) \leq n^{-1/2} \quad \text{and} \quad \mathrm{KL}(\rho_n \,\|\, \pi) \leq \frac{\beta}{n}.$$

Remark 3. When the restriction of $\pi$ to the MMD ball $B_n$ centered at $\theta^*$ of radius $n^{-1/2}$ belongs to $\mathcal{F}$, then Assumption (4.1) becomes the standard robust prior mass condition, i.e., $\pi(B_n) \geq e^{-\beta/n}$. In particular, when $\mathcal{F}$ is the set of all probability measures (that is, in the case where there is no variational approximation), we recover the standard condition.

Theorem 4. Under the extended prior mass condition (4.1):

$$\mathbb{E}\left[\int_\Theta D_k(P_\theta, P_0)\, \hat{\pi}_n^\beta(\mathrm{d}\theta)\right] \leq D_k(P_{\theta^*}, P_0) + \mathcal{O}\big(n^{-1/2}\big). \tag{4.2}$$

Moreover, if the model is well-specified, then under the prior mass condition $\mathcal{C}(\pi, \beta)$, we have in probability, for any $M_n \to +\infty$:

$$\hat{\pi}_n^\beta\big(\big\{\theta \in \Theta : D_k(P_\theta, P_0) > M_n \cdot n^{-1/2}\big\}\big) \longrightarrow 0. \tag{4.3}$$

In this section, we show that the variational approximation is robust in practice when estimating a Gaussian mean and a uniform distribution in the presence of outliers. We consider here a $d$-dimensional parametric model and a Gaussian mean-field variational set of distributions $\mathcal{N}(m, \mathrm{diag}(s^2))$, which can be sampled via the reparameterization $\theta = m + s \odot \epsilon$ with $\epsilon \sim \mathcal{N}(0, I_d)$, where $\odot$ denotes componentwise multiplication. Inspired by reparameterization-based stochastic gradient descent, and based on a U-statistic approximation of the MMD criterion, we design a stochastic gradient descent that is suited to our variational objective. The algorithm is described in detail in Appendix G. We perform short simulations to provide empirical support for our theoretical results. Indeed, we consider the problem of Gaussian mean estimation in the presence of outliers. The experiment consists in randomly sampling n = 200 i.i.d. observations from a Gaussian distribution, but with some corrupted observations replaced by samples from a standard Cauchy distribution. The fraction of outliers ranged from 0 to 0.20 with a step size of 0.025. We repeated each experiment 100 times and considered the square root of the mean square error (MSE).
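To make the training procedure concrete, here is a minimal sketch of the kind of stochastic gradient algorithm described above, for the Gaussian mean model with a mean-field Gaussian variational family. It is a stand-in for the algorithm of Appendix G under stated assumptions: it uses the squared MMD with a U-statistic estimate as the criterion, a single reparameterized draw of $\theta$ per step, and hypothetical default values for `beta`, `gamma`, and the learning rate.

```python
import torch

def mmd2_u(x, y, gamma):
    """Differentiable U-statistic estimate of the squared MMD (Gaussian kernel)."""
    def k(a, b):
        d2 = (a.unsqueeze(1) - b.unsqueeze(0)).pow(2).sum(-1)  # squared distances
        return torch.exp(-d2 / gamma ** 2)
    kxx, kyy, kxy = k(x, x), k(y, y), k(x, y)
    n, m = x.shape[0], y.shape[0]
    txx = (kxx.sum() - kxx.diagonal().sum()) / (n * (n - 1))
    tyy = (kyy.sum() - kyy.diagonal().sum()) / (m * (m - 1))
    return txx + tyy - 2.0 * kxy.mean()

def fit_variational_mmd_bayes(x, beta=100.0, gamma=1.0, sigma0=1.0,
                              n_model=64, steps=2000, lr=1e-2):
    """Mean-field Gaussian approximation of the MMD-Bayes posterior (sketch)."""
    d = x.shape[1]
    mu = torch.zeros(d, requires_grad=True)      # variational mean m
    log_s = torch.zeros(d, requires_grad=True)   # log of variational std s
    opt = torch.optim.Adam([mu, log_s], lr=lr)
    for _ in range(steps):
        theta = mu + log_s.exp() * torch.randn(d)        # theta = m + s * eps
        y = theta + sigma0 * torch.randn(n_model, d)     # samples from P_theta
        # Closed-form KL(q || prior) for q = N(mu, diag(s^2)) and prior = N(0, I).
        kl = 0.5 * ((mu ** 2 + (2 * log_s).exp() - 1.0).sum() - 2.0 * log_s.sum())
        loss = beta * mmd2_u(x, y, gamma) + kl
        opt.zero_grad()
        loss.backward()
        opt.step()
    return mu.detach(), log_s.exp().detach()
```

For the contamination experiment above, one would call `fit_variational_mmd_bayes` on the corrupted sample and report `mu` as the location estimate.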
The plots we obtained demonstrate that our method performs comparably to the componentwise median (MED), and even better as the number of outliers increases, and clearly outperforms the maximum likelihood estimator (MLE). We also conducted the simulations for multidimensional Gaussians and for the robust estimation of the location parameter of a uniform distribution. We refer the reader to Appendix H for more details on these simulations.

In this paper, we showed that the MMD-Bayes posterior concentrates at the minimax convergence rate and is robust to model misspecification. We also proved that reasonable variational approximations of this posterior retain the same properties, and we proposed a stochastic gradient algorithm to compute such approximations, which we supported with numerical simulations. An interesting future line of research would be to investigate whether the i.i.d. assumption can be relaxed and whether the MMD-based estimator is also robust to dependency in the data.

Appendix A. Proof of Theorem 1.

In order to prove Theorem 1, we first need two preliminary lemmas. The first one ensures the convergence of the empirical measure $\hat{P}_n$ to the true distribution $P_0$ (in MMD distance $D_k$) at the minimax rate $n^{-1/2}$; it is an expectation variant of a known bound that holds with high probability:

Lemma 5. $\mathbb{E}\big[D_k(\hat{P}_n, P_0)\big] \leq n^{-1/2}.$

The rate $n^{-1/2}$ is known to be minimax in this case. The second lemma is a simple triangle-like inequality that will be widely used throughout the proofs of the paper:

Lemma 6. We have for any distributions $P$, $P'$ and $Q$:

$$D_k(P, Q)^2 \leq \big(D_k(P, P') + D_k(P', Q)\big)^2 \leq 2\, D_k(P, P')^2 + 2\, D_k(P', Q)^2.$$

Proof. The chain of inequalities follows directly from the triangle inequality and the inequality $2ab \leq a^2 + b^2$.

Let us come back to the proof of Theorem 1. An important point is that the MMD-Bayes can also be defined using an argmin over the set $\mathcal{M}_+^1(\Theta)$ of all probability distributions absolutely continuous with respect to $\pi$, and the Kullback-Leibler divergence $\mathrm{KL}(\cdot\|\cdot)$:

$$\pi_n^\beta = \arg\min_{\rho \in \mathcal{M}_+^1(\Theta)} \left\{ \beta \int_\Theta D_k(P_\theta, \hat{P}_n)\, \rho(\mathrm{d}\theta) + \mathrm{KL}(\rho \,\|\, \pi) \right\}.$$

This is an immediate consequence of Donsker and Varadhan's variational inequality. Using the triangle inequality, Lemma 5, Lemma 6 for different settings of $P$, $P'$ and $Q$, and Jensen's inequality, we obtain for any $\rho$ absolutely continuous with respect to $\pi$ a bound of the form

$$\mathbb{E}\left[\int_\Theta D_k(P_\theta, P_0)\, \pi_n^\beta(\mathrm{d}\theta)\right] \leq \int_\Theta D_k(P_\theta, P_0)\, \rho(\mathrm{d}\theta) + \frac{\mathrm{KL}(\rho \,\|\, \pi)}{\beta} + \mathcal{O}\big(n^{-1/2}\big).$$

We remind that $\theta^* = \arg\min_{\theta \in \Theta} D_k(P_\theta, P_0)$. Choosing $\rho$ equal to $\pi$ restricted to $B_n$, so that $\mathrm{KL}(\rho\|\pi) = -\log \pi(B_n)$ and $\int_\Theta D_k(P_\theta, P_{\theta^*})\,\rho(\mathrm{d}\theta) \leq n^{-1/2}$, and using the prior mass condition $\mathcal{C}(\pi, \beta)$, i.e. $-\log \pi(B_n) \leq \beta/n$, we finally get

$$\mathbb{E}\left[\int_\Theta D_k(P_\theta, P_0)\, \pi_n^\beta(\mathrm{d}\theta)\right] \leq D_k(P_{\theta^*}, P_0) + \mathcal{O}\big(n^{-1/2}\big).$$

Appendix B. Proof of Theorem 2.

In the well-specified case, Formula (3.1) simply becomes, according to Jensen's inequality:

$$\mathbb{E}\left[\int_\Theta D_k(P_\theta, P_0)\, \pi_n^\beta(\mathrm{d}\theta)\right] \leq \mathcal{O}\big(n^{-1/2}\big).$$

Hence, it is sufficient to show that the inequality above implies the concentration of the MMD-Bayes around the true distribution. This is a simple consequence of Markov's inequality. Indeed, for any $M_n \to +\infty$:

$$\mathbb{E}\left[\pi_n^\beta\big(D_k(P_\theta, P_0) > M_n \cdot n^{-1/2}\big)\right] \leq \frac{\mathbb{E}\left[\int_\Theta D_k(P_\theta, P_0)\, \pi_n^\beta(\mathrm{d}\theta)\right]}{M_n \cdot n^{-1/2}} = \mathcal{O}\big(M_n^{-1}\big),$$

which guarantees the convergence in mean of $\pi_n^\beta\big(D_k(P_\theta, P_0) > M_n \cdot n^{-1/2}\big)$ to 0, which in turn leads to its convergence in probability, i.e., the concentration of the MMD-Bayes around $P_0$ at rate $n^{-1/2}$.

Appendix C. Proof of Theorem 4.

Formula (4.2) can be proven easily, as in the proof of Theorem 1. Indeed, we use the expression of the variational approximation of the MMD-Bayes as an argmin over the set $\mathcal{F}$:

$$\hat{\pi}_n^\beta = \arg\min_{\rho \in \mathcal{F}} \left\{ \beta \int_\Theta D_k(P_\theta, \hat{P}_n)\, \rho(\mathrm{d}\theta) + \mathrm{KL}(\rho \,\|\, \pi) \right\}.$$

This is yet another application of Donsker and Varadhan's lemma. Then, as previously, for the distribution $\rho_n \in \mathcal{F}$ of Assumption (4.1):

$$\mathbb{E}\left[\int_\Theta D_k(P_\theta, P_0)\, \hat{\pi}_n^\beta(\mathrm{d}\theta)\right] \leq \int_\Theta D_k(P_\theta, P_0)\, \rho_n(\mathrm{d}\theta) + \frac{\mathrm{KL}(\rho_n \,\|\, \pi)}{\beta} + \mathcal{O}\big(n^{-1/2}\big).$$

Hence, under the extended prior mass condition (4.1), we directly get (4.2). The proof of Formula (4.3) follows the lines of the proof of Theorem 2.

Appendix D. An example of robustness of the MMD distance.
In this appendix, we try to give some intuition on the choice of the MMD-Bayes rather than the classical Bayesian posterior. To do so, we show a simple misspecified example for which the MMD distance is more suited than the classical Kullback-Leibler (KL) divergence used in the Bayes rule defining the classical Bayesian posterior. We consider the Huber contamination model, described as follows. We observe a collection of random variables $X_1, \dots, X_n$. There are unobserved i.i.d. random variables $Z_1, \dots, Z_n \sim \mathrm{Ber}(\epsilon)$ and a distribution $Q$, such that the distribution of $X_i$ given $Z_i = 0$ is a Gaussian $\mathcal{N}(\theta_0, \sigma^2)$, whereas the distribution of $X_i$ given $Z_i = 1$ is $Q$. The observations $X_i$ are independent. This is equivalent to considering a true distribution $P_0 = (1-\epsilon)\mathcal{N}(\theta_0, \sigma^2) + \epsilon Q$. Here, $\epsilon \in (0, 1/2)$ is the contamination rate, $\sigma^2$ is a known variance, and $Q$ is the contamination distribution, taken here as $\mathcal{N}(\theta_c, \sigma^2)$, where $\theta_c$ is the mean of the corrupted observations. The true parameter of interest is $\theta_0$ and the model is composed of Gaussian distributions $\{P_\theta = \mathcal{N}(\theta, \sigma^2) \,/\, \theta \in \mathbb{R}^d\}$. The goal in this appendix is to show that we exactly recover the true parameter $\theta_0$ with the minimizer of the MMD distance to the true distribution $P_0$, whereas this is not the case with the KL divergence. We use a Gaussian kernel $k(x, y) = \exp(-\|x - y\|^2/\gamma^2)$.

Computation of the MMD distance to the true distribution: We remind that $P_\theta = \mathcal{N}(\theta, \sigma^2 I_d)$ where $\theta \in \Theta = \mathbb{R}^d$. For independent $X$ and $Y$ following respectively $P_\theta$ and $P_{\theta'}$, we get $X - Y \sim \mathcal{N}(\theta - \theta', 2\sigma^2 I_d)$, and the squared norm of this random variable, divided by $2\sigma^2$, is a noncentral chi-square random variable; evaluating its moment generating function at $t = -(2\sigma^2)/\gamma^2$ gives:

$$\langle \mu_{P_\theta}, \mu_{P_{\theta'}} \rangle_{\mathcal{H}_k} = \mathbb{E}\big[k(X, Y)\big] = \left(\frac{\gamma^2}{\gamma^2 + 4\sigma^2}\right)^{d/2} \exp\left(-\frac{\|\theta - \theta'\|^2}{\gamma^2 + 4\sigma^2}\right).$$

Thus, expanding $D_k(P_0, P_\theta)^2 = \|\mu_{P_0}\|^2 - 2\langle \mu_{P_0}, \mu_{P_\theta}\rangle + \|\mu_{P_\theta}\|^2$ and noting that $\|\mu_{P_\theta}\|^2$ does not depend on $\theta$, the minimizer of $D_k(P_0, P_\theta)$ w.r.t. $\theta$, i.e. the maximizer of

$$(1-\epsilon)\,\exp\left(-\frac{\|\theta - \theta_0\|^2}{\gamma^2 + 4\sigma^2}\right) + \epsilon\,\exp\left(-\frac{\|\theta - \theta_c\|^2}{\gamma^2 + 4\sigma^2}\right),$$

is $\theta_0$ itself, as $\epsilon \leq 1/2$.

Computation of the KL divergence to the true distribution: In this case, easy computations lead, for any $\theta$, to

$$\mathrm{KL}(P_0 \,\|\, P_\theta) = C(P_0) + \frac{(1-\epsilon)\,\|\theta_0 - \theta\|^2 + \epsilon\,\|\theta_c - \theta\|^2}{2\sigma^2},$$

where $C(P_0)$ does not depend on $\theta$. Hence, the minimizer of $\mathrm{KL}(P_0 \,\|\, P_\theta)$ w.r.t. $\theta$ is $(1-\epsilon)\theta_0 + \epsilon\theta_c$, which can be far away from $\theta_0$ in situations where the corrupted mean $\theta_c$ is very far from the true parameter $\theta_0$.

Appendix E. An example of computation of a robust prior mass.

In this appendix, we tackle the computation of a prior mass in the Gaussian mean estimation problem, and we show that it leads to a wide range of values of $\beta$ satisfying the prior mass condition $\mathcal{C}(\pi, \beta)$ for a standard normal prior $\pi$. We recall that the prior mass condition $\mathcal{C}(\pi, \beta)$ is satisfied as soon as $\beta \geq -n \log \pi(B_n)$. In practice, lower bounds of the form $\pi(B_n) \geq L\, e^{-f(\theta^*)}$ naturally appear when computing the prior mass $\pi(B_n)$, where only $f(\theta^*)$ depends on the parameter $\theta^*$ corresponding to the best approximation in the model (in the MMD sense) of the true distribution, that is, the true parameter itself when the model is well-specified. Hence, it is sufficient to choose a value of the temperature parameter $\beta \geq n\,(f(\theta^*) - \log L)$ in order to obtain the prior mass condition. We conduct the computation in a misspecified case, where we assume that a proportion $1-\epsilon$ of the observations are sampled i.i.d. from a Gaussian distribution of interest $P_{\theta_0}$ with variance $\sigma^2$, but that the remaining observations are corrupted and can take any arbitrary value. We consider the model of Gaussian distributions $\{P_\theta = \mathcal{N}(\theta, \sigma^2) \,/\, \theta \in \mathbb{R}^d\}$. This adversarial contamination model is more general than Huber's contamination model presented in Appendix D.
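Before proceeding with the prior mass computation, here is a quick numerical sanity check of the closed-form expression for $\mathbb{E}[k(X, Y)]$ derived in Appendix D above, which the derivations below reuse. All numerical values are hypothetical; a Monte Carlo average over Gaussian draws should agree with the closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma, gamma = 2, 1.0, 2.0
theta, theta_p = np.array([0.5, -1.0]), np.array([1.5, 0.0])

# Monte Carlo estimate of E[k(X, Y)], X ~ N(theta, sigma^2 I), Y ~ N(theta_p, sigma^2 I).
x = theta + sigma * rng.standard_normal((200_000, d))
y = theta_p + sigma * rng.standard_normal((200_000, d))
mc = np.mean(np.exp(-np.sum((x - y) ** 2, axis=1) / gamma ** 2))

# Closed form from the noncentral chi-square MGF evaluated at -(2 sigma^2)/gamma^2.
cf = (gamma ** 2 / (gamma ** 2 + 4 * sigma ** 2)) ** (d / 2) \
     * np.exp(-np.sum((theta - theta_p) ** 2) / (gamma ** 2 + 4 * sigma ** 2))

print(mc, cf)  # the two numbers should agree to roughly three decimal places
```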
Note that when $\epsilon = 0$, the model is well-specified and the distribution of interest $P_{\theta_0}$ is also the true distribution $P_0$. We use the Gaussian kernel $k(x, y) = \exp(-\|x - y\|^2/\gamma^2)$ and the standard normal prior $\pi = \mathcal{N}(0, I_d)$. We write the inequality defining parameters $\theta$ belonging to $B_n$:

$$D_k(P_\theta, P_{\theta^*}) \leq n^{-1/2}. \tag{E.1}$$

Note that when the model is well-specified, we get $\theta^* = \theta_0$. According to the derivations performed in Appendix D, we have for any $\theta$:

$$D_k(P_\theta, P_{\theta^*})^2 = 2\left(\frac{\gamma^2}{\gamma^2 + 4\sigma^2}\right)^{d/2}\left(1 - \exp\left(-\frac{\|\theta - \theta^*\|^2}{\gamma^2 + 4\sigma^2}\right)\right).$$

Hence, using $1 - e^{-x} \leq x$ and $(\gamma^2/(\gamma^2+4\sigma^2))^{d/2} \leq 1$, Inequality (E.1) holds as soon as

$$\|\theta - \theta^*\| \leq s_n := \sqrt{\frac{4\sigma^2 + \gamma^2}{2n}}.$$

We denote by $B(\theta^*, s_n)$ the ball of radius $s_n$ centered at $\theta^*$. Let us compute the prior mass of $B_n$:

$$\pi(B_n) \geq \pi\big(B(\theta^*, s_n)\big) \geq \mathrm{Vol}\big(B(\theta^*, s_n)\big) \cdot \inf_{\theta \in B(\theta^*, s_n)} \frac{e^{-\|\theta\|^2/2}}{(2\pi)^{d/2}}.$$

Actually, the point that minimizes $\theta \mapsto e^{-\|\theta\|^2/2}$ on $B(\theta^*, s_n)$ is $\theta^*(1 + s_n/\|\theta^*\|)$. Thus:

$$\pi(B_n) \geq \frac{\mathrm{Vol}\big(B(\theta^*, s_n)\big)}{(2\pi)^{d/2}} \exp\left(-\frac{(\|\theta^*\| + s_n)^2}{2}\right).$$

We recall the formula for the volume of the $d$-dimensional ball of radius $r$:

$$\mathrm{Vol}_d(r) = \frac{\pi^{d/2}\, r^d}{\Gamma(d/2 + 1)}.$$

Hence:

$$\pi(B_n) \geq \frac{s_n^d}{2^{d/2}\,\Gamma(d/2+1)} \exp\left(-\frac{(\|\theta^*\| + s_n)^2}{2}\right).$$

As could be expected for a standard normal prior, the larger the value of $\|\theta^*\|$, the smaller the prior mass can be. Hence, for the standard normal prior $\pi$, values of $\beta$ leading to consistency of the MMD-Bayes are:

$$\beta \geq n\left(\frac{(\|\theta^*\| + s_n)^2}{2} + \log \Gamma(d/2+1) + \frac{d}{2}\log 2 + d \log \frac{1}{s_n}\right).$$

In particular, when $\gamma^2$ is of order $d$, then using Stirling's approximation, we get a lower bound on the valid values of $\beta$ of order (up to a logarithmic factor):

$$\beta \gtrsim n \cdot \max\big(\|\theta^*\|^2, d\big).$$

Note that when the log-density $\log p_\theta(x)$ is not differentiable, it is often possible to compute the stochastic gradients involving $\theta_1, \dots, \theta_M$ directly, without using the Monte Carlo samples $Y_1, \dots, Y_M$. For instance, when the model is a uniform distribution $P_\theta = \mathcal{U}([\theta - a, \theta + a])$ and the kernel can be written as $k(x, y) = K(x - y)$ for some function $K$ (such as Gaussian kernels), we have:

$$\frac{\partial}{\partial \theta}\, \mathbb{E}_{Y \sim P_\theta}\big[k(x, Y)\big] = \frac{\partial}{\partial \theta}\, \frac{1}{2a}\int_{\theta - a}^{\theta + a} K(x - y)\, \mathrm{d}y = \frac{K(x - \theta - a) - K(x - \theta + a)}{2a},$$

a formula that can be checked numerically, as shown in the sketch after this section.

Results: The error of our estimators as a function of the contamination ratio $\epsilon$ is plotted in Figures 1, 2 and 3. These plots show that our method is applicable to various problems and leads to a good estimator for all of them. Indeed, the plots in Figures 1 and 2 show that the MMD estimator performs as well as the componentwise median, and even better when the number of outliers in the dataset increases, much better than the MLE in the robust Gaussian mean estimation problem, and is not affected much by the presence of outliers in the data. For the uniform location parameter estimation problem addressed in Figure 3, the MMD estimator is clearly the one that performs best and is not affected by a reasonable proportion of outliers, contrary to the method of moments, whose square root of MSE increases linearly with $\epsilon$, and to the MLE, which gives inconsistent estimates as soon as there is an outlier in the data.
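The boundary formula above for the uniform model can be verified by comparing it against a finite-difference derivative of the integral it differentiates. The sketch below does exactly that, with hypothetical parameter values.

```python
import numpy as np

def grad_uniform_kernel_term(x, theta, a, gamma):
    """d/dtheta of E_{Y ~ U([theta-a, theta+a])}[K(x - Y)] for K(u) = exp(-u^2/gamma^2).

    Uses the boundary formula from the text; no differentiation of the
    (non-differentiable) uniform log-density is needed.
    """
    K = lambda u: np.exp(-u ** 2 / gamma ** 2)
    return (K(x - theta - a) - K(x - theta + a)) / (2 * a)

# Finite-difference check of the boundary formula (hypothetical values).
x, theta, a, gamma, eps = 0.3, 0.0, 1.0, 1.0, 1e-5

def expected_kernel(t):
    ys = np.linspace(t - a, t + a, 200_001)
    return np.trapz(np.exp(-(x - ys) ** 2 / gamma ** 2), ys) / (2 * a)

print(grad_uniform_kernel_term(x, theta, a, gamma),
      (expected_kernel(theta + eps) - expected_kernel(theta - eps)) / (2 * eps))
```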
Robust Bayesian Estimation via Maximum Mean Discrepancy
The standard variational lower bounds used to train latent variable models produce biased estimates of most quantities of interest. We introduce an unbiased estimator of the log marginal likelihood and its gradients for latent variable models based on randomized truncation of infinite series. If parameterized by an encoder-decoder architecture, the parameters of the encoder can be optimized to minimize the variance of this estimator. We show that models trained using our estimator give better test-set likelihoods than a standard importance-sampling based approach for the same average computational cost. This estimator also allows use of latent variable models for tasks where unbiased estimators, rather than marginal likelihood lower bounds, are preferred, such as minimizing reverse KL divergences and estimating score functions.

Reinforcement learning aside, latent variable models are powerful tools for constructing highly expressive data distributions and for understanding how high-dimensional observations might possess a simpler representation. Latent variable models are often framed as probabilistic graphical models, allowing these relationships to be expressed in terms of conditional independence. Mixture models, probabilistic principal component analysis, hidden Markov models, and latent Dirichlet allocation are all examples of powerful latent variable models. More recently, there has been a surge of interest in probabilistic latent variable models that incorporate flexible nonlinear likelihoods via deep neural networks. These models can blend the advantages of highly structured probabilistic priors with the empirical successes of deep learning. Moreover, these explicit latent variable models can often yield relatively interpretable representations, in which simple interpolation in the latent space can lead to semantically meaningful changes in high-dimensional observations.

It can be challenging, however, to fit the parameters of a flexible latent variable model, since computing the marginal probability of the data requires integrating out the latent variables in order to maximize the likelihood with respect to the model parameters. Typical approaches to this problem include the celebrated expectation maximization algorithm, Markov chain Monte Carlo, and the Laplace approximation. Variational inference generalizes expectation maximization by forming a lower bound on the aforementioned (log) marginal likelihood, using a tractable approximation to the unmanageable posterior over latent variables. The maximization of this lower bound, rather than the true log marginal likelihood, is often relatively straightforward when using automatic differentiation and Monte Carlo sampling. However, a lower bound may be ill-suited for tasks such as posterior inference and other situations where there exists an entropy maximization objective; for example, entropy-regularized reinforcement learning requires minimizing the log probability of the samples under the model. While there is a long history in Bayesian statistics of estimating the marginal likelihood, we often want high-quality estimates of the logarithm of the marginal likelihood, which is better behaved when the data is high-dimensional: it is not as susceptible to underflow, and it has gradients that are numerically sensible. However, the log transformation introduces some challenges: Monte Carlo estimation techniques such as importance sampling do not straightforwardly give unbiased estimates of this quantity.
Nevertheless, there has been significant work to construct estimators of the log marginal likelihood in which it is possible to explicitly trade off bias against computational cost. Unfortunately, while there are asymptotic regimes where the bias of these estimators approaches zero, it is always possible to optimize the parameters to increase this bias to infinity. In this work, we construct an unbiased estimator of the log marginal likelihood. Although there is no theoretical guarantee that this estimator has finite variance, we find that it can work well in practice. We show that this unbiased estimator can train latent variable models to achieve higher test log-likelihood than lower bound estimators at the same expected compute cost. More importantly, this unbiased estimator allows us to apply latent variable models in situations where these models were previously problematic to optimize with lower bound estimators. Such applications include latent variable modeling for posterior inference and for reinforcement learning in high-dimensional action spaces, where an ideal model is one that is highly expressive yet efficient to sample from.

Latent variable models (LVMs) describe a distribution over data in terms of a mixture over unobserved quantities. Let $p_\theta(x)$ be a family of probability density (mass) functions on a data space $\mathcal{X}$, indexed by parameters $\theta$. We will generally refer to this as a "density" for consistency, even when the data should be understood to be discrete; similarly, we will use integrals even when the marginalization is over a discrete set. In a latent variable model, $p_\theta(x)$ is defined via a space of latent variables $\mathcal{Z}$, a family of mixing measures on this latent space with density denoted $p_\theta(z)$, and a conditional distribution $p_\theta(x \mid z)$. This conditional distribution is sometimes called an "observation model" or a conditional likelihood. We will take $\theta$ to parameterize both $p_\theta(x \mid z)$ and $p_\theta(z)$ in the service of determining the marginal $p_\theta(x)$ via the mixture integral:

$$p_\theta(x) = \int_{\mathcal{Z}} p_\theta(x \mid z)\, p_\theta(z)\, \mathrm{d}z = \mathbb{E}_{z \sim p_\theta(z)}\big[p_\theta(x \mid z)\big].$$

This simple formalism allows for a large range of modeling approaches, in which complexity can be baked into the latent variables (as in traditional graphical models), into the conditional likelihood (as in variational autoencoders), or into both (as in structured VAEs). The downside of this mixing approach is that the integral may be intractable to compute, making it difficult to evaluate $p_\theta(x)$, a quantity often referred to in Bayesian statistics and machine learning as the marginal likelihood or evidence. Various Monte Carlo techniques have been developed to provide consistent and often unbiased estimators of $p_\theta(x)$, but it is usually preferable to work with $\log p_\theta(x)$, and unbiased estimation of this quantity has, to our knowledge, not been previously studied.

Fitting a parametric distribution to observed data is often framed as the minimization of a difference between the model distribution and the empirical distribution. The most common difference measure is the forward Kullback-Leibler (KL) divergence; if $p_{\text{data}}(x)$ is the empirical distribution and $p_\theta(x)$ is a parametric family, then minimizing the KL divergence $D_{\mathrm{KL}}(p_{\text{data}} \,\|\, p_\theta)$ with respect to $\theta$ is equivalent to

$$\max_\theta\; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log p_\theta(x)\big].$$

Since expectations can be estimated in an unbiased manner using Monte Carlo procedures, simple subsampling of the data enables powerful stochastic optimization techniques, with stochastic gradient descent in particular forming the basis for learning the parameters of many nonlinear models.
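As a concrete illustration of the difficulty discussed next, the following sketch (with `prior` and `decoder` as assumed torch.distributions-style objects, not a prescribed API) forms a naive Monte Carlo estimate of the mixture integral. The inner average is unbiased for $p_\theta(x)$, but taking its logarithm yields a downward-biased estimate of $\log p_\theta(x)$ by Jensen's inequality.

```python
import torch

def naive_log_marginal(x, prior, decoder, K=100):
    """Monte Carlo estimate of p(x) = E_{z ~ p(z)}[p(x | z)], returned in log space.

    `prior` and `decoder(z)` are assumed to be torch.distributions objects. The
    inner average is unbiased for p(x), but the outer logarithm makes this a
    downward-biased estimate of log p(x) (Jensen's inequality).
    """
    z = prior.sample((K,))
    log_px_given_z = decoder(z).log_prob(x)  # one value per latent sample
    return torch.logsumexp(log_px_given_z, dim=0) - torch.log(torch.tensor(float(K)))
```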
However, this requires unbiased estimates of ∇_θ log p_θ(x), which are not available for latent variable models. Instead, a stochastic lower bound of log p_θ(x) is often used and then differentiated for optimization. Though many lower bound estimators are applicable, we focus on an importance-weighted evidence lower bound. This lower bound is constructed by introducing a proposal distribution q(z; x) and using it to form an importance sampling estimate of the marginal likelihood, p_θ(x) = E_{z∼q(z;x)}[p_θ(x, z)/q(z; x)]. If K samples are drawn from q(z; x) then this provides an unbiased estimate of p_θ(x), and the biased "importance-weighted autoencoder" estimator IWAE_K(x) of log p_θ(x) is given by IWAE_K(x) = log((1/K) Σ_{k=1}^K p_θ(x, z_k)/q(z_k; x)). The special case of K = 1 generates an unbiased estimate of the evidence lower bound (ELBO), which is often used for performing variational inference by stochastic gradient descent. While the IWAE lower bound acts as a useful replacement of log p_θ(x) in maximum likelihood training, it may not be suitable for other objectives, such as those that involve entropy maximization. We discuss tasks for which a lower bound estimator would be ill-suited in Section 3.4. There are two properties of IWAE that will allow us to modify it to produce an unbiased estimator. First, it is consistent, in the sense that as the number of samples K increases, the expectation of IWAE_K(x) converges to log p_θ(x). Second, it is also monotonically non-decreasing in expectation: E[IWAE_{K+1}(x)] ≥ E[IWAE_K(x)] (equation 6). These properties are sufficient to create an unbiased estimator using the Russian roulette estimator. In order to create an unbiased estimator of the log probability function, we employ the Russian roulette estimator. This estimator is used to estimate the sum of infinite series, where evaluating any term in the series almost surely requires only a finite amount of computation. Intuitively, the Russian roulette estimator relies on a randomized truncation and upweighting of each term to account for the possibility of not computing these terms. To illustrate the idea, let Δ̂_k denote the k-th term of an infinite series. Assume the partial sums of the series Σ_{k=1}^∞ Δ̂_k converge to some quantity we wish to obtain. We can construct a simple estimator by always computing the first term, then flipping a coin b ∼ Bernoulli(q) to determine whether we stop or continue evaluating the remaining terms. With probability 1 − q, we compute the rest of the series. By reweighting the remaining future terms by 1/(1 − q), we obtain an unbiased estimator. To obtain the "Russian roulette" (RR) estimator, we repeatedly apply this trick to the remaining terms. In effect, we make the number of terms a random variable K, taking values in 1, 2, ... (i.e., the number of successful coin flips), drawn from some distribution with probability mass function p(K) with support over the positive integers. With K drawn from p(K), the estimator takes the form Ŷ = Σ_{k=1}^K Δ̂_k / P(K ≥ k) (equation 7). The equality E[Ŷ] = Σ_{k=1}^∞ Δ̂_k holds so long as (i) P(K ≥ k) > 0 for all k > 0, and (ii) the series converges absolutely, i.e., Σ_k E|Δ̂_k| < ∞. This condition ensures that the average of multiple samples will converge to the value of the infinite series by the law of large numbers. However, the variance of this estimator depends on the choice of p(K) and can potentially be very large or even infinite. We can turn any absolutely convergent series into a telescoping series and apply the Russian roulette randomization to form an unbiased stochastic estimator. We focus here on the IWAE bound described in Section 2.2.
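The following is a minimal sketch of the Russian roulette trick described above, using a geometric p(K) with stopping probability q; `delta` is a hypothetical callable returning the k-th term of the series.

```python
import numpy as np

def russian_roulette(delta, q=0.5, rng=np.random.default_rng(0)):
    """Unbiased estimate of sum_{k=1}^inf delta(k) via randomized truncation.

    After each term we flip a coin and stop with probability q, so term k is
    reached with probability P(K >= k) = (1 - q)**(k - 1) and is reweighted
    by 1 / P(K >= k) to keep the estimator unbiased (equation 7 with a
    geometric p(K)).
    """
    estimate, k, survival = delta(1), 1, 1.0
    while rng.random() >= q:          # continue with probability 1 - q
        k += 1
        survival *= 1.0 - q           # P(K >= k)
        estimate += delta(k) / survival
    return estimate

# Example: the geometric series sum_k 2**-k = 1.
vals = [russian_roulette(lambda k: 0.5 ** k) for _ in range(20000)]
print(np.mean(vals))                  # close to 1.0 on average
```

Individual draws can be far from the true sum; unbiasedness is a statement about the average, which is why the variance induced by p(K) matters so much in what follows.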
Writing log p_θ(x) as the telescoping series E[IWAE_1(x)] + Σ_{k=1}^∞ E[Δ_k(x)], with Δ_k(x) := IWAE_{k+1}(x) − IWAE_k(x), this series converges absolutely, so we apply equation 7 to construct our estimator, which we call SUMO (Stochastically Unbiased Marginalization Objective): SUMO(x) = IWAE_1(x) + Σ_{k=1}^K Δ_k(x)/P(K ≥ k). The detailed derivation of SUMO is in Appendix A.1. The randomized truncation of the series using the Russian roulette estimator means that this is an unbiased estimator of the log marginal likelihood, regardless of the distribution p(K): E[SUMO(x)] = log p_θ(x), where the expectation is taken over p(K) and q(z; x) (see Algorithm 1, Computing SUMO, an unbiased estimator of log p(x), for our exact sampling procedure). Furthermore, under some conditions, we have E[∇_θ SUMO(x)] = ∇_θ log p_θ(x). To efficiently optimize a limit, one should choose an estimator to minimize the product of the second moment of the gradient estimates and the expected compute cost per evaluation. The choice of p(K) affects both the variance and the computation cost of our estimator. Denoting Ĝ := ∇_θ Ŷ and Δ^g_k := ∇_θ Δ_k, the Russian roulette estimator is optimal across a broad family of unbiased randomized truncation estimators if the Δ^g_k are statistically independent, in which case it has second moment E||Ĝ||² = Σ_k E||Δ^g_k||² / P(K ≥ k). While the Δ^g_k are not in fact strictly independent with our sampling procedure (Algorithm 1), and other estimators within the family may perform better, we justify our choice by showing that E[Δ^g_i Δ^g_j] for i ≠ j converges to zero much faster than E||Δ^g_k||² (Appendices A.2 & A.3). In the following, we assume independence of the Δ^g_k and choose p(K) to minimize the product of compute and variance. We first show that E||Δ^g_k||²₂ decays at a rate of O(1/k²). This implies the optimal compute-variance product is achieved by choosing P(K ≥ k) proportional to the square root of E||Δ^g_k||²₂. In our case, this gives P(K ≥ k) = 1/k, which results in an estimator with infinite expected computation and no finite bound on variance. In fact, any p(K) which gives rise to provably finite variance requires a heavier tail than P(K ≥ k) = 1/k and so will have infinite expected computation. Though we could not theoretically show that our estimator and its gradients have finite variance, we empirically find that gradient descent converges, even in the setting of minimizing log probability. We plot ||Δ_k||²₂ for the toy variational inference task used to assess signal-to-noise ratio in Rainforth et al. (2018b), and find that the terms converge faster than 1/k² in practice (Appendix A.6). While this indicates the variance is better than the theoretical bound, an estimator having infinite expected computation cost will always be an issue, as it indicates a significant probability of sampling arbitrarily large K. We therefore modify the tail of the sampling distribution such that the estimator has finite expected computation: P(K ≥ k) matches 1/k for small k but decays more quickly beyond a threshold α. We typically choose α = 80, which gives an expected computation cost of approximately 5 terms. One way to improve the RR estimator is to construct it so that some minimum number of terms (denoted here as m) are always computed. This puts a lower bound on the computational cost, but can potentially lower variance, providing a design space for trading off estimator quality against computational cost. This corresponds to a choice of RR estimator in which P(K ≥ k) = 1 for all k ≤ m. This computes the sum out to m terms (effectively computing IWAE_m) and then estimates the remaining difference with Russian roulette, with the remaining terms reweighted as before. In practice, instead of tuning the parameters of p(K), we set m to achieve a given expected computation cost per estimator evaluation, for fair comparison with IWAE and related estimators.
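Below is a sketch of the sampling procedure in the spirit of Algorithm 1, assuming a `sample_logw(n)` callable that returns n i.i.d. log importance weights log p_θ(x, z_i) − log q(z_i; x). The geometric tail rate (0.9) beyond the threshold α is an assumption for illustration; the text only specifies that the tail is modified with α = 80.

```python
import numpy as np

def iwae_series(logw):
    """IWAE_k(x) = log((1/k) * sum_{i<=k} w_i) for every prefix k."""
    cum = np.logaddexp.accumulate(logw)          # cumulative logsumexp of w_i
    return cum - np.log(np.arange(1, len(logw) + 1))

def sumo(sample_logw, m=1, alpha=80, tail=0.9, rng=np.random.default_rng()):
    """Sketch of the SUMO estimator with m terms always computed."""
    def survival(k):                              # P(K >= k), tail-modified 1/k
        return 1.0 / k if k <= alpha else (1.0 / alpha) * tail ** (k - alpha)
    u, K = rng.random(), 1
    while survival(K + 1) > u:                    # invert the survival function
        K += 1
    iw = iwae_series(sample_logw(m + K))          # IWAE_1 ... IWAE_{m+K}
    deltas = np.diff(iw[m - 1:])                  # Delta terms beyond IWAE_m
    weights = np.array([1.0 / survival(k) for k in range(1, K + 1)])
    return iw[m - 1] + np.dot(deltas, weights)    # IWAE_m + reweighted tail
```

Note how the log-weights are reused across all partial sums via the cumulative logsumexp; this is also why consecutive Δ_k terms are not strictly independent.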
The SUMO estimator does not require amortized variational inference, but the use of an "encoder" to produce an approximate posterior q(z; x) has been shown to be a highly effective way to perform rapid feedforward inference in neural latent variable models. We use φ to denote the parameters of the encoder q_φ(z; x). However, the gradients of SUMO with respect to φ are zero in expectation, precisely because SUMO is an unbiased estimator of log p_θ(x) regardless of our choice of q_φ(z; x). Nevertheless, we would expect the choice of q_φ(z; x) to significantly impact the variance of our estimator. As such, we optimize q_φ(z; x) to reduce the variance of the SUMO estimator. Since E[SUMO(x)] does not depend on φ, minimizing the variance is equivalent to minimizing the second moment, for which we can obtain unbiased gradients via the reparameterization trick: ∇_φ Var(SUMO(x)) = E[∇_φ (SUMO(x))²]. Notably, the expectation of this estimator depends on the variance of SUMO, which we have not been able to bound. In practice, we observe gradients which are sometimes very large. We apply gradient clipping to the encoder to clip gradients which are excessively large in magnitude. This helps stabilize the training progress but introduces bias into the encoder gradients. Fortunately, the encoder itself is merely a tool for variance reduction, and biased gradients with respect to the encoder can still significantly help optimization. Here we list some applications where an unbiased log probability estimate is useful. Using SUMO to replace existing lower bound estimates allows latent variable models to be used for new applications where a lower bound is inappropriate. As latent variable models can be both expressive and efficient to sample from, they are frequently useful in applications where the data is high-dimensional and samples from the model are needed. Minimizing log p_θ(x). Some machine learning objectives include terms that seek to increase the entropy of the learned model. The "reverse KL" objective, often used for training models to perform approximate posterior inference, minimizes E_{x∼p_θ(x)}[log p_θ(x) − log π(x)], where π(x) is a target density that may only be known up to a normalization constant. Local updates of this form are the basis of the expectation propagation procedure. This objective has also been used for distilling autoregressive models that are inefficient at sampling. Moreover, the reverse KL is connected to the use of entropy-regularized objectives in decision-making problems, where the goal is to encourage the decision maker toward exploration and prevent it from settling into a local minimum. Unbiased score function ∇_θ log p_θ(x). The score function is the gradient of the log-likelihood with respect to the parameters, and has uses in estimating the Fisher information matrix and performing stochastic gradient Langevin dynamics, among other applications. Of particular note, the REINFORCE gradient estimator, generally applicable for optimizing objectives of the form max_θ E_{x∼p_θ(x)}[R(x)], is estimated using the score function. This can be replaced with the gradient of SUMO, which is itself an estimator of the score function: ∇_θ E_{x∼p_θ(x)}[R(x)] = E_{x∼p_θ(x)}[R(x) E[∇_θ SUMO(x)]], where the inner expectation is over the stochasticity of the SUMO estimator. Such estimators are often used for reward maximization in reinforcement learning, where p_θ(x) is a stochastic policy. There is a long history in Bayesian statistics of marginal likelihood estimation in the service of model selection. The harmonic mean estimator, for example, has a long (and notorious) history as a consistent estimator of the marginal likelihood that may have infinite variance and exhibits simulation pseudo-bias.
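Before turning to related work, the following is a small sketch of the reverse KL use case above. All three callables (`sample_model`, `log_prob_estimator`, `log_target`) are hypothetical stand-ins for model components, not part of the original text.

```python
import torch

def reverse_kl_loss(sample_model, log_prob_estimator, log_target, n=128):
    """Monte Carlo reverse KL: E_{x ~ p_theta}[log p_theta(x) - log pi(x)].

    `sample_model` must produce reparameterized samples, and
    `log_prob_estimator` must be (near-)unbiased for log p_theta(x): a lower
    bound such as IWAE would let the optimizer inflate the bound's bias
    instead of minimizing the true objective.
    """
    x = sample_model(n)
    return (log_prob_estimator(x) - log_target(x)).mean()
```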
The Chib estimator, the Laplace approximation, and nested sampling are alternative proposals that can often have better properties. Annealed importance sampling probably represents the gold standard for marginal likelihood estimation. These, however, yield consistent estimators at best when estimating the log marginal probability. Bias removal schemes such as jackknife variational inference have been proposed to debias log-evidence estimation, IWAE in particular. Hierarchical IWAE uses a joint proposal to induce negative correlation among samples, and connects the convergence of the variance of the estimator with the convergence of the lower bound. Russian roulette also has a long history. It dates back to unpublished work from von Neumann and Ulam, who used it to debias Monte Carlo methods for matrix inversion and particle transport problems. It has gained popularity in statistical physics, for unbiased ray tracing in graphics and rendering, and for a number of estimation problems in the statistics community. It has also been independently rediscovered many times. The use of Russian roulette estimation in deep learning and generative modeling applications has been gaining traction in recent years. It has been used to solve short-term bias in optimization problems. Though we extend latent variable models to applications that require unbiased estimates of log probability and benefit from efficient sampling, an interesting family of models already fulfills these requirements. Normalizing flows offer exact log probability, and certain models have been proven to be universal density estimators. However, these models often require restrictive architectural choices with no dimensionality-reduction capabilities, and make use of many more parameters to scale up than alternative generative models. Discrete-variable versions of these models are still in their infancy and make use of biased gradients, whereas latent variable models naturally extend to discrete observations. We first compare the performance of SUMO when used as a replacement for IWAE with the same expected cost on density modeling tasks. We make use of two benchmark datasets: dynamically binarized MNIST and binarized OMNIGLOT. We use the same neural network architecture as IWAE. The prior p(z) is a 50-dimensional standard Gaussian distribution. The conditional distributions p(x_i|z) are independent Bernoulli, with the decoder parameterized by two hidden layers, each with 200 tanh units. The approximate posterior q(z; x) is also a 50-dimensional Gaussian distribution with diagonal covariance, whose mean and variance are both parameterized by two hidden layers with 200 tanh units. We reimplemented and tuned IWAE, obtaining strong baseline results which are better than those previously reported. We then used the same hyperparameters to train with the SUMO estimator. We find that clipping very large gradients can help performance, as large gradients may be infrequently sampled. This introduces a small amount of bias into the gradients while reducing variance, but can nevertheless help achieve faster convergence and should still result in a less-biased estimator. A post-hoc study of the effect on final test performance as a function of this bias-variance tradeoff mechanism is discussed in Appendix A.7. We note that gradient clipping is only done for the density modeling experiments. The averaged test log-likelihoods and standard deviations over 3 runs are summarized in Table 1.
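For concreteness, here is a sketch of the decoder architecture described above; the 784-dimensional output (for 28x28 binarized MNIST) and the PyTorch framing are assumptions for illustration.

```python
import torch
import torch.nn as nn

class BernoulliDecoder(nn.Module):
    """Decoder as described in the text: a 50-dimensional latent mapped
    through two hidden layers of 200 tanh units to independent Bernoulli
    logits over the data dimensions."""
    def __init__(self, latent_dim=50, data_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 200), nn.Tanh(),
            nn.Linear(200, 200), nn.Tanh(),
            nn.Linear(200, data_dim),   # logits of p(x_i | z)
        )

    def forward(self, z):
        return self.net(z)
```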
To be consistent with existing literature, we evaluate our model using IWAE with 5000 samples. In all cases, SUMO achieves slightly better performance than IWAE with the same expected cost. We also bold the results that are statistically indistinguishable from the best performing model according to an unpaired t-test with significance level 0.05. However, we do see diminishing returns as we increase k, suggesting that as we increase compute, the variance of our estimator may impact performance more than the bias of IWAE. We move on to our first task for which a lower bound estimate of log probability would not suffice. The reverse KL objective is useful when we have access to a (possibly unnormalized) target distribution but no efficient sampling algorithm. A major problem with fitting latent variable models to this objective is the presence of an entropy maximization term, effectively a minimization of log p_θ(x). Estimating this log marginal probability with a lower bound estimator could result in optimizing θ to maximize the bias of the estimator instead of the true objective. Our experiments demonstrate that this causes IWAE to often fail to optimize the objective unless we use a large amount of computation. Figure 1 (caption): We trained latent variable models for posterior inference, which requires minimizing log probability under the model. Training with IWAE leads to optimizing for the bias while leaving the true model in an unstable state, whereas training with SUMO, though noisy, leads to convergence. Modifying IWAE. The bias of the IWAE estimator can be interpreted as the KL divergence between an importance-weighted approximate posterior q_IW(z; x), implicitly defined by the encoder, and the true posterior p(z|x). Both the encoder and decoder parameters can therefore affect this bias. In practice, we find that the encoder optimization proceeds at a faster timescale than the decoder optimization: i.e., the encoder can match q_IW(z; x) to the decoder's p(z|x) more quickly than the latter can match an objective. For this reason, we train the encoder to reduce the bias, and use a minimax training objective (equation 15) in which the encoder tightens the bound that the decoder is optimizing. Though this is still a lower bound with unbounded bias, it makes for a stronger baseline than optimizing q(z; x) in the same direction as p(x, z). We find that this approach can work well in practice when k is set sufficiently high. We choose a "funnel" target distribution (Figure 1), similar to the distribution used as a benchmark for inference in prior work, where p* has support in R² and is defined by p*(x₁, x₂) = N(x₁; 0, 1.35²) N(x₂; 0, e^{2x₁}). We use neural networks with one hidden layer of 200 hidden units and tanh activations for both the encoder and decoder networks. We use 20 latent variables, with p(z), p_θ(x|z), and q_φ(z; x) all being Gaussian distributed. Figure 2 shows the learning curves when using IWAE and SUMO. Unless k is set very large, IWAE will at some point start optimizing the bias instead of the actual objective. The reverse KL is a non-negative quantity, so any estimate significantly below zero can be attributed to the unbounded bias. On the other hand, SUMO correctly optimizes for the objective even with a small expected cost. Increasing the expected cost k for SUMO reduces variance. Figure 3 (caption): Latent variable policies allow faster exploration than autoregressive policy models, while being more expressive than an independent policy. SUMO works well with entropy regularization, whereas IWAE is unstable and converges to similar performance as the non-latent-variable model.
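A minimal sketch of the funnel target defined above, directly transcribing the stated density (the Gaussian parameterization by variance is the one implied by the formula):

```python
import numpy as np

def funnel_log_density(x1, x2):
    """Log density of p*(x1, x2) = N(x1; 0, 1.35^2) N(x2; 0, e^{2 x1})."""
    def log_normal(v, var):
        return -0.5 * (np.log(2.0 * np.pi * var) + v ** 2 / var)
    return log_normal(x1, 1.35 ** 2) + log_normal(x2, np.exp(2.0 * x1))
```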
For the same expected cost, SUMO can optimize the true objective but IWAE cannot. We also found that if k is set sufficiently large, then IWAE can work when we train using the minimax objective in equation 15, suggesting that a sufficiently debiased estimator can also work in practice. However, this requires much more compute and likely does not scale compared to SUMO. We also visualize the contours of the resulting models in Figure 1. For IWAE, we visualize the model a few iterations before it reaches numerical instability. Let us now consider the problem of finding the maximum of a non-differentiable function, a special case of reinforcement learning without an interacting environment. Variational optimization can be used to reformulate this as the optimization of a parametric distribution, which is now a differentiable function with respect to the parameters θ, whose gradients can be estimated using a combination of the REINFORCE gradient estimator and the SUMO estimator (equation 13). Furthermore, entropy-regularized reinforcement learning, where we maximize R(x) + λH(p_θ) with H(p_θ) being the entropy of p_θ(x), encourages exploration and is inherently related to minimizing a reverse KL objective with the target being an exponentiated reward. For concreteness, we focus on the problem of quadratic pseudo-Boolean optimization (QPBO), where the objective is to maximize R(x) = Σ_i w_i x_i + Σ_{i<j} w_ij x_i x_j (equation 16), where the x_i ∈ {0, 1} are binary variables. Without further assumptions, QPBO is NP-hard. As there exist complex dependencies between the binary variables, and optimization of equation 16 requires sampling from the policy distribution p_θ(x), a model that is both expressive and allows efficient sampling would be ideal. For this reason, we motivate the use of latent variable models with independent conditional distributions, which we trained using the SUMO objective. Our baselines are an autoregressive policy, which captures dependencies but for which sampling must be performed sequentially, and an independent policy, which is easy to sample from but captures no dependencies. We note that prior work also argued for latent variable policies in favor of learning diverse strategies, but ultimately had to make use of normalizing flows, which did not require marginalization. We constructed one problem instance for each d ∈ {100, 500}, which we note are already intractable for exact optimization. For each instance, we randomly sampled the weights w_i and w_ij uniformly from the interval [−1, 1]. Figure 3 shows the performance of each policy model. In general, the independent policy is quick to converge to a local minimum and is unable to explore different regions, whereas more complex models have a better grasp of the "frontier" of reward distributions during optimization. The autoregressive policy works well overall, but is much slower to train due to its sequential sampling procedure; with d = 500, it is 19.2× slower than training with SUMO. Surprisingly, we find that estimating the REINFORCE gradient with IWAE results in decent performance when no entropy regularization is present. With entropy regularization, all policies improve significantly; however, training with IWAE in this setting results in performance similar to the independent model. On the other hand, SUMO works with both REINFORCE gradient estimation and entropy regularization, albeit at the cost of slower convergence due to variance.
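A short sketch of the QPBO objective above, with a random instance generated as in the experimental setup (weights uniform on [−1, 1]); the specific seed and evaluation point are illustrative.

```python
import numpy as np

def qpbo_reward(x, w, W):
    """R(x) = sum_i w_i x_i + sum_{i<j} W_ij x_i x_j for binary x in {0,1}^d."""
    x = np.asarray(x, dtype=float)
    return w @ x + x @ np.triu(W, k=1) @ x   # only i < j pairs contribute

# A random instance matching the setup in the text.
rng = np.random.default_rng(0)
d = 100
w, W = rng.uniform(-1.0, 1.0, d), rng.uniform(-1.0, 1.0, (d, d))
print(qpbo_reward(rng.integers(0, 2, d), w, W))
```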
We introduced SUMO, a new unbiased estimator of the log probability for latent variable models, and demonstrated tasks for which this estimator performs better than standard lower bounds. Specifically, we investigated applications involving entropy maximization where a lower bound performs poorly, but our unbiased estimator can train properly with a relatively smaller amount of compute. In the future, we plan to investigate new families of gradient-based optimizers which can handle heavy-tailed stochastic gradients. It may also be fruitful to investigate the use of convex combinations of consistent estimators within the SUMO approach, as any convex combination is unbiased, or to apply variance reduction methods to increase the stability of training with SUMO. A APPENDIX A.1 DERIVATION OF SUMO Recall IWAE_k(x) = log((1/k) Σ_{i=1}^k w_i), where w_i = p_θ(x, z_i)/q(z_i; x) and z_1, ..., z_k are sampled independently from q(z; x), and define the k-th term of the infinite series as Δ_k(x) = IWAE_{k+1}(x) − IWAE_k(x). Using the properties of IWAE in equation 6, we have E[Δ_k(x)] ≥ 0, and Σ_{k=1}^∞ E[Δ_k(x)] = log p_θ(x) − E[IWAE_1(x)] < ∞, which means the series converges absolutely. This is a sufficient condition for finite expectation of the Russian roulette estimator. Applying equation 7 to the series, log p_θ(x) = E[IWAE_1(x)] + Σ_{k=1}^∞ E[Δ_k(x)] (equation 22), our estimator is constructed as SUMO(x) = IWAE_1(x) + Σ_{k=1}^K Δ_k(x)/P(K ≥ k) with K ∼ p(K) (equation 23). It can be easily seen from equation 22 and equation 23 that SUMO is an unbiased estimator of the log marginal likelihood: E[SUMO(x)] = log p_θ(x). A.2 CONVERGENCE OF Δ_k We follow the analysis of JVI, which applied the delta method for moments to show that the asymptotic bias and variance of IWAE_k both decay at a rate of O(1/k). We define Y_k := (1/k) Σ_{i=1}^k w_i as the sample mean, with E[Y_k] = µ. We note that we rely on ||Y_k − µ|| < 1 for the relevant power series to converge. This condition was implicitly assumed, but not explicitly noted, in earlier analyses; it will hold for sufficiently large k so long as the moments of the w_i exist: one can bound the probability that ||Y_k − µ|| ≥ 1 by Chebyshev's inequality or by the central limit theorem. Expanding the logarithm around µ in the central moments to order two, and using the fact that Y_k and Y_{k+1} are computed with a cumulative sum, one obtains that E[Δ_k²] decays at a rate of O(1/k²). For the cross terms, suppose without loss of generality that j ≥ k + 1, and for clarity let C_k = Y_k − µ be the zero-mean random variable. Decomposing C_j into C_k and a component B_{k,j} that is independent of C_k, expanding both sums inside the brackets to order two, and bounding each of the resulting terms (several of which vanish exactly), one finds that E[Δ_k Δ_j] for j ≠ k converges to zero strictly faster than E[Δ_k²]. Turning to the gradients, assume that ∇_θ SUMO is bounded: it is sufficient that ∇_θ IWAE_1 is bounded and that the sampling probabilities are chosen such that the partial sums of the reweighted gradient increments remain bounded. Unbiasedness of the gradient, E[∇_θ SUMO(x)] = ∇_θ log p_θ(x), then follows by the dominated convergence theorem, as long as SUMO is everywhere differentiable, which is satisfied by all of our experiments. If ReLU neural networks are to be used, one may be able to show the same property assuming finite higher moments and a finite Lipschitz constant. The IWAE log likelihood estimate is IWAE_k(x) = log((1/k) Σ_{i=1}^k w_i), and the gradient of this with respect to λ, where λ is either θ or ψ, is ∇_λ IWAE_k(x) = (Σ_{i=1}^k ν_i)/(Σ_{i=1}^k w_i), where we abbreviate w_i := p_θ(x, z_i)/q_ψ(z_i|x) and ν_i := dw_i/dλ. In both the λ = ψ and λ = θ cases, it suffices to treat the w_i and ν_i as i.i.d. random variables with finite variance and expectation. Being a likelihood ratio, w_i could be ill behaved when the importance sampling distribution q_ψ(z_i|x) is particularly mismatched from the true posterior p(z_i|x) = p_θ(x, z_i)/E_{z∼p(z)}[p_θ(x, z)].
However, the analysis from IWAE requires assuming that the likelihood ratios w_i = p_θ(x, z_i)/q_ψ(z_i|x) are bounded, and we adopt this assumption. Reasoning about when this assumption holds, and the behavior of IWAE-like estimators when it does not, is an interesting area for future work. Consider the difference between two consecutive gradients, which we label Δ^g_k := ∇_λ IWAE_{k+1}(x) − ∇_λ IWAE_k(x). We again let Y_k denote the k-th sample mean (1/k) Σ_i w_i, and let μ̂_k denote the corresponding sample mean of the ν_i. The sample means Y_k and μ̂_k have finite expectation and variance. The variance vanishes as k → ∞ (but the expectation does not change). The second term of the resulting decomposition vanishes at a rate strictly faster than 1/k²: the variance of φ_k goes to zero as k → ∞. But the first term does not: φ_k is a biased estimator of φ_∞, so E[φ_k] does change with k, but it does not necessarily go to zero. A.7 BIAS-VARIANCE TRADEOFF VIA GRADIENT CLIPPING While SUMO is unbiased, its variance is extremely high or potentially infinite. This property leads to poor performance compared to lower bound estimates such as IWAE when maximizing log-likelihood. In order to obtain models with competitive log-likelihood values, we can make use of gradient clipping. This allows us to ignore rare gradient samples with extremely large values due to the heavy-tailed nature of the gradient distribution. Gradient clipping introduces bias in favor of reduced variance. Figure 6 shows how the performance changes as a function of the clipping value and, more importantly, the percentage of clipped gradients. As shown, neither full clipping nor no clipping is desirable. We performed this experiment after reporting the results in Table 1, so this grid search was not used to tune hyperparameters for our experiments. As bias is introduced, we do not use gradient clipping for entropy maximization or policy gradient (REINFORCE). Figure 6 (caption): Test negative log-likelihood against the gradient clipping norm and clipping percentage, when training with SUMO (k = 15). In the density modeling experiments, all models are trained using a batch size of 100 and an Amsgrad optimizer with parameters lr = 0.001, β₁ = 0.9, β₂ = 0.999, and ε = 10⁻⁴. The learning rate is reduced by a factor of 0.8 with a patience of 50 epochs. We use gradient norm scaling in both the inference and generative networks. We train SUMO using the same architecture and hyperparameters as IWAE, except for the gradient clipping norm. We set the gradient clipping norm to 5000 for the encoder and to {20, 40, 60} for the decoder in SUMO. For IWAE, the gradient norm is fixed to 10 in all the experiments. We report the performance of models with early stopping if no improvements have been observed for 300 epochs on the validation set. We add additional plots of the test NLL against the norm and percentage of gradients clipped for the decoder in Figure 6. The plot is based on MNIST with an expected compute of k = 15 terms. Gradient clipping was not used in the other experiments except the density modeling ones, where it can be used as a tool to obtain a better bias-variance trade-off. A.8.1 REVERSE KL AND COMBINATORIAL OPTIMIZATION These two tasks use the same encoder and decoder architecture: one hidden layer with tanh nonlinearities and 200 hidden units. We set the latent state to be of size 20. The prior is a standard Gaussian with diagonal covariance, while the encoder distribution is a Gaussian with parameterized diagonal covariance. For the reverse KL task we used independent Gaussian conditional likelihoods for p(x|z), while for combinatorial optimization we used independent Bernoulli conditional distributions.
We found that removing momentum helps stabilize training for both IWAE and SUMO, and used RMSprop with learning rate 0.00005 and epsilon 1e-3 when fitting the reverse KL. We used Adam with learning rate 0.001, epsilon 1e-3, and otherwise standard hyperparameters for the combinatorial optimization problems. SUMO used an expected compute of 15 terms, with m = 5 and the tail-modified telescoping Zeta distribution.
We create an unbiased estimator for the log probability of latent variable models, extending such models to a larger scope of applications.
This paper makes two contributions towards understanding how the hyperparameters of stochastic gradient descent affect the final training loss and test accuracy of neural networks. First, we argue that stochastic gradient descent exhibits two regimes with different behaviours: a noise dominated regime, which typically arises for small or moderate batch sizes, and a curvature dominated regime, which typically arises when the batch size is large. In the noise dominated regime, the optimal learning rate increases as the batch size rises, and the training loss and test accuracy are independent of batch size under a constant epoch budget. In the curvature dominated regime, the optimal learning rate is independent of batch size, and the training loss and test accuracy degrade as the batch size rises. We support these claims with experiments on a range of architectures including ResNets, LSTMs and autoencoders. We always perform a grid search over learning rates at all batch sizes. Second, we demonstrate that small or moderately large batch sizes continue to outperform very large batches on the test set, even when both models are trained for the same number of steps and reach similar training losses. Furthermore, when training Wide-ResNets on CIFAR-10 with a constant batch size of 64, the optimal learning rate to maximize the test accuracy only decays by a factor of 2 when the epoch budget is increased by a factor of 128, while the optimal learning rate to minimize the training loss decays by a factor of 16. These results confirm that the noise in stochastic gradients can introduce beneficial implicit regularization. Stochastic gradient descent (SGD) is the most popular optimization algorithm in deep learning, but it remains poorly understood. A number of papers propose simple scaling rules that predict how changing the learning rate and batch size will influence the final performance of popular network architectures (e.g., Jastrzębski et al., 2017). Some of these scaling rules are contradictory, and it has been argued that none of these simple prescriptions work reliably across multiple architectures. Some papers claim SGD with Momentum significantly outperforms SGD without Momentum, but others observe little difference between both algorithms in practice. We hope to clarify this debate. We argue that minibatch stochastic gradient descent exhibits two regimes with different behaviours: a noise dominated regime and a curvature dominated regime. The noise dominated regime typically arises for small or moderate batch sizes, while the curvature dominated regime typically arises when the batch size is large. The curvature dominated regime may also arise if the epoch budget is small or the loss is poorly conditioned. Our extensive experiments demonstrate that: 1. In the noise dominated regime, the final training loss and test accuracy are independent of batch size under a constant epoch budget, and the optimal learning rate increases as the batch size rises. In the curvature dominated regime, the optimal learning rate is independent of batch size, and the training loss and test accuracy degrade with increasing batch size. The critical learning rate which separates the two regimes varies between architectures. 2. If specific assumptions are satisfied, then the optimal learning rate is proportional to batch size in the noise dominated regime. These assumptions hold for most tasks. However, we observe a square root scaling rule when performing language modelling with an LSTM.
This is not surprising, since consecutive gradients in a language model are not independent. 3. SGD with Momentum and learning rate warmup do not outperform vanilla SGD in the noise dominated regime, but they can outperform vanilla SGD in the curvature dominated regime. There is also an active debate regarding the role of stochastic gradients in promoting generalization. It has been suspected for a long time that stochastic gradients sometimes generalize better than full batch gradient descent. This topic was revived by work showing that the test accuracy often falls if one holds the learning rate constant and increases the batch size, even if one continues training until the training loss ceases to fall. Many authors have studied this effect (Jastrzębski et al., 2017), but to our knowledge no paper has demonstrated a clear generalization gap between small and large batch training under a constant step budget on a challenging benchmark while simultaneously tuning the learning rate. This phenomenon has also been questioned by a number of authors. Some argued that one can reduce the generalization gap between small and large batch sizes if one introduces additional regularization (we note that this is consistent with the claim that stochastic gradients can enhance generalization). Others suggested that a noisy quadratic model is sufficient to describe the performance of neural networks on both the training set and the test set. In this work, we verify that small or moderately large batch sizes substantially outperform very large batches on the test set in some cases, even when compared under a constant step budget. However, the batch size at which the test accuracy begins to degrade can be larger than previously thought. We find that the test accuracy of a 16-4 Wide-ResNet trained on CIFAR-10 for 9765 updates falls from 94.7% at a batch size of 4096 to 92.8% at a batch size of 16384. When performing language modelling with an LSTM on the Penn TreeBank dataset for 16560 updates, the test perplexity rises from 81.7 to 92.2 when the batch size rises from 64 to 256. We observe no degradation in the final training loss as the batch size rises in either model. These surprising results motivated us to study how the optimal learning rate depends on the epoch budget for a fixed batch size. As expected, the optimal test accuracy is maximized for a finite epoch budget, consistent with the well-known phenomenon of early stopping. Meanwhile, the training loss falls monotonically as the epoch budget increases, consistent with classical optimization theory. More surprisingly, the learning rate that maximizes the final test accuracy decays very slowly as the epoch budget increases, while the learning rate that minimizes the training loss decays rapidly. These results provide further evidence that the noise in stochastic gradients can enhance generalization in some cases, and they suggest novel hyper-parameter tuning strategies that may reduce the cost of identifying the optimal learning rate and optimal epoch budget. We describe the noise dominated and curvature dominated regimes of SGD with and without Momentum in section 2. We focus on the analogy between SGD and stochastic differential equations, but our primary contributions are empirical and many of our results can be derived from different assumptions. In section 3, we provide an empirical study of the relationship between the optimal learning rate and the batch size under a constant epoch budget, which verifies the existence of the two regimes in practice.
In section 4, we study the relationship between the optimal learning rate and the batch size under a constant step budget, which confirms that stochastic gradients can introduce implicit regularization enhancing the test set accuracy. Finally, in section 5, we fix the batch size and consider the relationship between the optimal learning rate and the epoch budget. Full batch gradient descent is in the "curvature dominated" regime, where the optimal learning rate is determined by the curvature of the loss function. The full batch gradient descent update on the i-th step is given by ω_{i+1} = ω_i − ε (dC/dω)|_{ω=ω_i}, where the loss C(ω) = (1/N) Σ_{j=1}^N C(ω, x_j) is a function of the parameters ω and the training inputs {x_j}_{j=1}^N, and ε denotes the learning rate. In order to minimize the loss as quickly as possible, we will set the learning rate at the start of training as large as we can while avoiding divergences or instabilities. To build our intuition for this, we approximate the loss by a strictly convex quadratic, C(ω) ≈ (1/2) ωᵀHω. For simplicity, we assume the minimum lies at ω = 0. Substituting this approximation into the parameter update, we conclude ω_{i+1} = ω_i − εHω_i. In the eigenbasis of H, ω_{i+1} = (I − εΛ)ω_i, where I denotes the identity matrix and Λ denotes a diagonal matrix comprising the eigenvalues of H. The iterates will converge so long as the learning rate ε < ε_crit, where ε_crit = 2/λ_max is the critical learning rate above which training diverges, and λ_max is the largest Hessian eigenvalue. We call this inequality the curvature constraint, and the optimal initial learning rate with full batch gradients will be just below ε_crit. For clarity, although the critical learning rate will perform poorly for high curvature directions of the loss, we can introduce learning rate decay to minimize the loss along these directions later in training. Of course, in realistic loss landscapes this critical learning rate might also change during training. Acceleration methods such as Heavy-Ball Momentum (referred to as Momentum from here on) were designed to enable faster convergence on poorly conditioned loss landscapes. Momentum works by taking an exponential moving average of previous gradients, v_{i+1} = m v_i + (dC/dω)|_{ω=ω_i} with ω_{i+1} = ω_i − ε v_{i+1}, where m denotes the momentum coefficient. Gradients in high curvature directions, which often switch sign between updates, partially cancel out. This enables Momentum to take larger steps in low curvature directions while remaining stable in high curvature directions. On quadratic losses, for example, Momentum increases the critical learning rate, ε_crit ≤ 2(1 + m)/λ_max, and can minimize the training loss in fewer steps than full batch gradient descent. In practice we do not compute a full batch gradient, we estimate the gradient over a minibatch. This introduces noise into our parameter updates, and this noise will play a crucial role in the training dynamics in some cases. However, when the batch size is large and the number of training epochs is finite, the noise in the parameter updates is low, and so typically most of training is governed by the curvature of the loss landscape (similar to full batch gradient descent). We call this large batch training regime curvature dominated. When the batch size is in the curvature dominated regime, we expect the optimal initial learning rate to be determined by the critical learning rate ε_crit, and SGD with Momentum to outperform SGD without Momentum.
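The curvature constraint above is easy to verify numerically on a toy quadratic; the diagonal Hessian and step count below are illustrative choices.

```python
import numpy as np

def gd_on_quadratic(H, lr, steps=200):
    """Run w <- (I - lr * H) w on C(w) = 0.5 * w^T H w, starting from w = 1."""
    w = np.ones(H.shape[0])
    for _ in range(steps):
        w = w - lr * (H @ w)
    return np.linalg.norm(w)

H = np.diag([10.0, 1.0, 0.1])               # toy Hessian; lambda_max = 10
eps_crit = 2.0 / 10.0                       # critical learning rate 2 / lambda_max
print(gd_on_quadratic(H, 0.9 * eps_crit))   # shrinks towards 0
print(gd_on_quadratic(H, 1.1 * eps_crit))   # blows up
```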
On the other hand, when the batch size is small, typically most of the training process is governed by the noise in the parameter updates, and we call this small batch training regime noise dominated. In order to build a model of the training dynamics in the noise dominated regime, we must make some assumptions. Following previous work (e.g., Jastrzębski et al., 2017), we assume the gradients of individual examples are independent samples from an underlying distribution, and that this distribution is not heavy tailed. When the training set size N ≫ 1, the batch size B ≫ 1, and B ≪ N, we can apply the central limit theorem to model the noise in a gradient update by a Gaussian noise source δ, whose covariance is inversely proportional to the batch size. We therefore approximate the SGD update by ω_{i+1} = ω_i − ε ((dC/dω)|_{ω=ω_i} + δ_i/√B) (equation 1). To interpret this update, we introduce the temperature T = ε/B to obtain ω_{i+1} = ω_i − ε (dC/dω)|_{ω=ω_i} + √(εT) δ_i (equation 2). Equation 2 describes the discretization of a stochastic differential equation with step size ε and temperature T, and we expect the dynamics of SGD to follow this underlying stochastic differential equation, so long as the learning rate ε ≪ ε_crit and the assumptions above are satisfied. When equation 2 holds and the learning rate ε ≪ ε_crit, any two training runs with the same temperature and the same epoch budget should achieve similar performance on both the training set and the test set. Consequently, we usually expect the learning rate to scale linearly with the batch size in the noise dominated regime, and this was observed in many empirical studies. For completeness, we derive this linear scaling rule in appendix B, and we demonstrate that the linear scaling rule can be derived without assuming that the batch size B ≫ 1. However, the remaining assumptions above are required, and they are not always satisfied. This linear scaling rule is therefore not valid in all cases. Empirically however, we have found that all batch sizes in the noise dominated regime achieve similar test accuracies and training losses under a constant epoch budget, even when the optimal learning rate does not obey linear scaling. Further observations on the two regimes: Many previous works have established that SGD with and without Momentum are equivalent in the small learning rate limit when m is fixed. In this limit, the speed of convergence of SGD with Momentum is governed by the effective learning rate ε_eff = ε/(1 − m), and the temperature T = ε_eff/B. We therefore expect SGD with and without Momentum to achieve the same final training losses and test accuracies in the noise dominated regime (where the optimal learning rate is smaller than ε_crit). Supporting this claim, a recent empirical study confirmed that SGD with and without Momentum achieve similar test accuracies in the small batch limit, while SGD with Momentum outperforms vanilla SGD in the large batch limit. In recent years, a number of authors have exploited large batch training and parallel computation to minimize the wallclock time of training deep networks. One such effort succeeded in training ResNet-50 to over 76% accuracy in under one hour, and since then this has fallen to just a few minutes. The same work also introduced learning rate warmup, and found that it enabled stable training with larger batch sizes; a result which we confirm in this work. This procedure has a straightforward interpretation within the two regimes.
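A minimal sketch of the linear scaling rule described above, using the effective learning rate for SGD with Momentum; the helper name and example values are illustrative.

```python
def linear_scaling(base_lr, base_batch, new_batch, momentum=0.9):
    """Scale the learning rate with batch size to hold T = eps_eff / B fixed.

    eps_eff = eps / (1 - m). Only meaningful in the noise dominated regime,
    i.e. while the scaled rate remains well below the critical learning rate.
    """
    new_lr = base_lr * new_batch / base_batch
    eps_eff = new_lr / (1.0 - momentum)
    return new_lr, eps_eff / new_batch      # (learning rate, temperature)

print(linear_scaling(0.1, 64, 512))         # lr grows 8x, temperature unchanged
```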
If the critical learning rate increases early in training, then learning rate warmup will increase the largest stable learning rate, which in turn enables efficient training with larger minibatches. Learning rate schedules: In the noise dominated regime, the learning rate increases as the batch size rises, and therefore as we increase the batch size we will eventually invalidate the assumption ε ≪ ε_crit and enter the curvature dominated regime. There may be a transition phase between the two regimes, although our experiments suggest this transition can be surprisingly sharp in practice. We note that, with the optimal learning rate schedule, many batch sizes might exhibit both the curvature dominated regime (typically early in training) and the noise dominated regime (typically towards the end of training). For example, on simple quadratic loss landscapes at any batch size, the optimal learning rate schedule for minimizing the training loss begins with an initial learning rate close to ε_crit, followed by learning rate decay. However, in practice it is not possible to identify the optimal learning rate schedule within a realistic computation budget. Practitioners prefer simple learning rate schedules, often parameterized by an initial learning rate and a few sharp drops. These schedules are easy to tune and are also thought to generalize well. For these popular schedules, the optimal initial learning rate would be determined by whether most of the training process is noise dominated or curvature dominated. Furthermore, it has been suggested that there may be an optimal temperature at early times that promotes generalization. If this speculation is correct, then the optimal learning rate schedule to maximize the test accuracy would select the noise dominated regime throughout the whole of training when the batch size is small. Of course, if the loss is extremely poorly conditioned, the critical learning rate may already be optimal when the batch size is 1, although we have never seen this in practice. Implicit regularization: In the noise dominated regime, the temperature T defines the influence of gradient noise on the dynamics. In the noisy quadratic model, our goal during training is to minimize the effect this noise has on the final parameters. However, recently many authors have argued that minibatch noise can be beneficial, helping us to select final parameters which perform well on the test set. In this alternative perspective, there may be an "optimal temperature" early in training, which drives the parameters towards regions that generalize well. The noisy quadratic model predicts that increasing the batch size may increase the final training loss under a constant epoch budget, but that it should not increase the final training loss under a constant step budget. Note that this model does not make explicit predictions about the test accuracy. We note that some authors have argued noise enables SGD to escape saddle points in non-convex landscapes, which could enable small batch sizes to achieve lower training losses under both constant epoch and constant step budgets. The optimal temperature perspective predicts that increasing the batch size may increase the final training loss under a constant epoch budget, but it does not predict what will happen to the training loss under a constant step budget.
Crucially, it predicts that beyond some threshold batch size (which may be very large), increasing the batch size will decrease the test accuracy under both constant epoch and constant step budgets. Figure 1 (caption): We report the performance of SGD with and without Momentum, and the combination of SGD with Momentum and learning rate warmup. We perform a grid search to identify the optimal learning rate which maximizes the test accuracy, and report the mean performance of the best 12 of 15 runs. a) The test accuracy is independent of batch size when the batch size is small, but begins to fall when the batch size exceeds 512. b) Similarly, the training loss at the optimal effective learning rate is independent of batch size when the batch size is small, but rises rapidly if acceleration techniques are not used when the batch size is large. c) The optimal effective learning rate is proportional to batch size when the batch size is small, while it is independent of batch size when the batch size is large. In order to verify the existence of the two regimes of SGD, we study how the performance on the training and test sets, and the optimal learning rate, depend on the batch size under a constant epoch budget (when using a realistic learning rate decay schedule). To explore this, we must first select a set of model architectures and datasets, and then identify a single learning rate decay schedule that performs well across all of these tasks, matching the baseline performance of the schedules reported in the original papers. For clarity, in the main text we only report experiments using Wide-ResNets on CIFAR-10; however, we provide additional experiments using ResNet-50, LSTMs and autoencoders in appendix D. For each experiment in this section, we train for the same number of epochs N_epochs reported in the original papers (e.g., 200 epochs on CIFAR-10). Our chosen schedule is the following. We hold the learning rate constant for the first N_epochs/2 epochs. Then, for the remainder of training, we reduce the learning rate by a factor of 2 every N_epochs/20 epochs. This scheme has a single hyper-parameter, the initial learning rate, and we found this schedule to reliably meet the performance reported by the authors of the original papers. In some of our experiments we also introduce learning rate warmup, whereby the learning rate is linearly increased from 0 to ε over the first 5 epochs of training. We illustrate these schedules in appendix A. We will evaluate the optimal test accuracy and the optimal learning rate for a range of batch sizes. At each batch size, we train the model 15 times for a range of learning rates on a logarithmic grid. For each learning rate in this grid, we take the best 12 runs and evaluate the mean and standard deviation of their test accuracy. The optimal test accuracy is defined by the maximum value of this mean, and the corresponding learning rate is the optimal learning rate. This procedure ensures our results are not corrupted by outliers or failed training runs. To define error bars on the optimal learning rate, we include any learning rate whose mean accuracy was within one standard deviation of the mean accuracy of the optimal learning rate, and we always verify that both the optimal learning rate and the error bars are not at the boundary of our learning rate grid. We apply data augmentation including padding, random crops and left-right flips.
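The selection protocol above is mechanical enough to sketch directly; `accuracies_by_lr` is an assumed mapping from each learning rate in the grid to its 15 run accuracies.

```python
import numpy as np

def select_optimal_lr(accuracies_by_lr):
    """Score each learning rate by the mean of its best 12 of 15 runs, and
    return the error bar as every rate whose mean lies within one standard
    deviation of the best mean, as described in the text."""
    stats = {}
    for lr, accs in accuracies_by_lr.items():
        best12 = np.sort(accs)[-12:]
        stats[lr] = (best12.mean(), best12.std())
    best_lr = max(stats, key=lambda lr: stats[lr][0])
    mean, std = stats[best_lr]
    error_bar = [lr for lr in stats if stats[lr][0] >= mean - std]
    return best_lr, error_bar
```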
The momentum coefficient m = 0.9, the L2 regularization coefficient is 5 × 10⁻⁴, and we use ghost batch normalization with a ghost batch size of 64. We also report the mean final training loss at the optimal learning rate. We note that although we tune the learning rate on the test set, our goal in this paper is not to report state of the art performance. Our goal is to compare the performance at different batch sizes and with different training procedures. We apply the same experimental protocol in each case. We also provide the full results of a learning rate sweep at two batch sizes in appendix D. In figure 1a, we plot the optimal test accuracy for a range of batch sizes with a 16-4 Wide-ResNet, trained with batch normalization using SGD with and without Momentum, and also with learning rate warmup. All three methods have the same optimal test accuracy when the batch size is small, but both SGD with Momentum and learning rate warmup outperform SGD without Momentum when the batch size is large. The optimal test accuracy is independent of batch size when the batch size is small, but begins to fall when the batch size grows. A very similar trend is observed for the final training loss at the optimal effective learning rate in figure 1b. To understand these results, we plot the optimal effective learning rate against batch size in figure 1c (for SGD, ε_eff = ε). Looking first at the curve for vanilla SGD, the learning rate is proportional to the batch size below B ≈ 512, beyond which the optimal learning rate is constant. SGD with Momentum and warmup have the same optimal effective learning rate as SGD in the small batch limit, but their optimal effective learning rates are larger when B > 512. All of these results exactly match the theoretical predictions we made in section 2. The behaviour of SGD is strongly influenced by batch normalization. We therefore repeat this experiment without normalization in appendix D. To ensure training is stable, we introduce a simple modification to the initialization scheme, "ZeroInit", introduced in a parallel submission and defined for the reader's convenience in appendix C. This modification enables the training of very deep networks, and it reduces the gap in optimal test accuracy between networks trained with and without batch normalization. We observe remarkably similar trends, although the critical learning rate, beyond which the optimal learning rate of SGD is independent of batch size, is significantly smaller. We also provide similar experiments in appendix D for a range of model architectures. In all cases, we observe a transition from a small batch regime, where the learning rate increases with the batch size and SGD with Momentum does not outperform SGD, to a large batch regime, where the learning rate is independent of the batch size and SGD with Momentum outperforms SGD. Under a constant epoch budget, both the training loss and the optimal test accuracy are independent of batch size in the noise dominated regime, but they begin to degrade when one enters the curvature dominated regime. In most cases the optimal learning rate in the noise dominated regime was proportional to batch size; however, for an LSTM trained on the Penn TreeBank dataset, the optimal learning rate was proportional to the square root of the batch size. To understand this, we note that consecutive minibatches in a language model are correlated, which violates the linear scaling assumptions discussed in section 2.
In the previous section, we studied how the optimal learning rate depends on the batch size under a constant epoch budget. As predicted in section 2, we found that SGD transitions between two regimes with different behaviours in a range of popular architectures. However, these results can be explained from a number of different perspectives, including both the interpretation of small learning rate SGD as the discretization of a stochastic differential equation, and also a simple noisy quadratic model of the loss landscape. Crucially, the results of the previous section do not tell us whether minibatch noise introduces implicit regularization that selects parameters that perform better on the test set, since under a constant epoch budget, when we increase the batch size we reduce the number of training steps. In order to establish whether minibatch noise enhances the test accuracy, we now evaluate how the optimal test accuracy depends on the batch size under a constant step budget. In this scheme, the number of training epochs is proportional to the batch size, which ensures that large batch sizes are able to minimize the training loss. In table 1, we report the optimal test accuracy of our 16-4 Wide-ResNet on CIFAR-10 at batch sizes ranging from 1024 to 16384. For each batch size, we train for 9765 updates using SGD with Momentum. Note that this corresponds to 200 epochs when the batch size is 1024 (to ensure these experiments did not require an unreasonably large epoch budget, we intentionally selected a batch size just below the boundary of the curvature dominated regime in figure 1). Following our previous schedule, we hold the learning rate constant for 4882 updates, and then decay the learning rate by a factor of 2 every 488 steps. We conclude that the optimal test accuracy initially increases with increasing batch size, but it then begins to fall. The optimal test accuracy at batch size 4096 is 94.7%, but the optimal test accuracy at batch size 16384 is just 92.8%. We also report the final training loss, which falls continuously with batch size. Notice that this occurs despite the fact that the optimal learning rate is defined by the test set accuracy. We observed similar results when training without batch normalization using ZeroInit, which are shown in table 2, and we also provide similar results on the autoencoder and LSTM tasks in appendix E. These results demonstrate that stochastic gradient noise can enhance generalization, increasing the test accuracy here by nearly 2%. This shows that while the noisy quadratic model may help describe the evolution of the training loss, it does not capture important phenomena observed on the test set in popular networks. While many previous authors have observed that stochastic gradient noise enhances generalization, we believe our experiment is the first to demonstrate this effect when training a well-respected architecture to the expected test accuracy with a properly tuned learning rate schedule. Finally, we emphasize that the implicit regularization introduced by stochastic gradients should be considered complementary to the implicit bias of gradient descent. We established in section 4 that, in some architectures and datasets, the noise introduced by stochastic gradients does enhance generalization. This motivates the following question: if the batch size is fixed, how do the optimal test accuracy and optimal learning rate depend on the epoch budget?
In particular, is the optimal training temperature, defined by the ratio of the learning rate to the batch size, independent of the epoch budget, or does it fall as the number of training epochs increases? To answer this question, we select a batch size of 64, and we evaluate both the optimal test accuracy and the optimal training loss for a range of epoch budgets using SGD with Momentum. As before, we use our standardized learning rate schedule for each epoch budget, described in appendix A. However, to study the effect of the optimal training temperature, we now independently measure both the optimal learning rate to maximize the test accuracy, and the optimal learning rate to minimize the training loss. The optimal test accuracy and optimal training loss are shown in figures 2a and 2b. We train both with and without batch normalization, and we provide the optimal learning rates with batch normalization in figure 2c, and the optimal learning rates without batch normalization in figure 2d. Considering first figure 2a, the optimal test accuracy initially increases as we increase the epoch budget; however, with batch normalization it saturates for epoch budgets beyond 800 epochs, while without batch normalization it falls for epoch budgets beyond 400 epochs. This is similar to the well-known phenomenon of early stopping. As expected, in figure 2b, we find that the optimal training loss falls monotonically as the epoch budget increases. Figure 2 (caption): The performance of a 16-4 Wide-ResNet on CIFAR-10 using SGD with Momentum and a batch size of 64. We train both with batch normalization, and also without batch normalization using "ZeroInit". We identify both the optimal effective learning rate which maximizes the test accuracy and the optimal effective learning rate which minimizes the training loss, and we present the mean performance of the best 12 out of 15 runs. a) Initially the test accuracy rises as the epoch budget increases; however, when training without batch normalization it begins to fall beyond 400 training epochs. b) The training loss falls monotonically as the epoch budget rises. c) With batch normalization, the learning rate which minimizes the training loss falls rapidly as the epoch budget rises, while the learning rate which maximizes the test accuracy only varies by a factor of 2 when the epoch budget rises over two orders of magnitude. d) Similarly, without batch normalization using ZeroInit, the learning rate which minimizes the training loss falls as the epoch budget rises, while the learning rate which maximizes the test accuracy is constant for all epoch budgets considered. Figures 2c and 2d are more surprising. Both with and without batch normalization, the learning rate which minimizes the training loss falls rapidly as the epoch budget rises. This is exactly what one would expect from convergence bounds or the noisy quadratic model. Strikingly however, when training with batch normalization, the learning rate which maximizes the test accuracy only falls by a factor of 2 when we increase the epoch budget from 50 to 6400 epochs. Meanwhile, when training without batch normalization using ZeroInit, the learning rate which maximizes the test accuracy is constant for all epoch budgets considered. These results support the claim that when training deep networks on classification tasks, there is an optimal temperature scale early in training, which biases small batch SGD towards parameters which perform well on the test set.
Our results also suggest that one might be able to reduce the cost of hyper-parameter tuning by first identifying the optimal learning rate for a modest epoch budget, before progressively increasing the epoch budget until the test accuracy saturates. We provide additional experimental results on the LSTM and autoencoder in appendix F. To further investigate whether there is an optimal temperature early in training, we provide an additional experiment in appendix G where we independently tune both the initial learning rate and the final learning rate in our schedule. Our main claims in this section still hold in this experiment. The contributions of this work are twofold. First, we verified that SGD exhibits two regimes with different behaviours. In the noise dominated regime which arises when the batch size is small, the test accuracy is independent of batch size under a constant epoch budget, the optimal learning rate increases as the batch size rises, and acceleration techniques do not outperform vanilla SGD. Meanwhile in the curvature dominated regime which arises when the batch size is large, the optimal learning rate is independent of batch size, acceleration techniques outperform vanilla SGD, and the test accuracy degrades with batch size. If certain assumptions are satisfied, the optimal learning rate in the noise dominated regime is proportional to batch size. These assumptions hold for most tasks. Second, we confirm that a gap in test accuracy between small or moderately large batch sizes and very large batches persists even when one trains under a constant step budget. When training a 16-4 Wide-ResNet on CIFAR-10 for 9765 updates, the test accuracy drops from 94.7% at a batch size of 4096 to 92.8% at a batch size of 16384. We also find that the optimal learning rate which maximizes the test accuracy of Wide-ResNets depends very weakly on the epoch budget while the learning rate which minimizes the training loss falls rapidly as the epoch budget increases. These results confirm that stochastic gradients introduce implicit regularization which enhances generalization, and they provide novel insights which could be used to reduce the cost of identifying the optimal learning rate. For clarity, we illustrate our standardized learning rate decay schedule in figure 3. As specified in the main text, if the epoch budget is N_epochs, we hold the learning rate constant for N_epochs/2 epochs, before decaying the learning rate by a factor of 2 every N_epochs/20 epochs. When learning rate warmup is included, we linearly increase the learning rate to its maximal value over the first 5 epochs of training. [Figure 3 caption: Our standardized learning rate schedule, both with and without learning rate warmup. Note that while the learning rate decays at points defined by the fraction of the total epoch budget completed, we perform learning rate warmup for 5 epochs, irrespective of the epoch budget.] In the main text, we applied the central limit theorem to approximate a single SGD step by ω_{i+1} = ω_i − ε(dC/dω + δ_i/√B) (equation 5), where the temperature T = ε/B, E(δ_i) = 0 and E(δ_i δ_j) = F(ω_i)δ_{ij}, and F(ω) is the empirical Fisher information matrix. Equation 5 holds so long as the gradient of each training example is an independent and uncorrelated sample from an underlying short tailed distribution. Additionally, it assumes that the training set size N ≫ 1, the batch size B ≫ 1, and B ≪ N.
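The standardized schedule of appendix A, illustrated in figure 3 above, can be written as a minimal sketch. The function below follows the text directly; the piecewise boundaries at exact epoch counts are a mild implementation assumption.

```python
def standardized_lr_schedule(epoch, epoch_budget, max_lr,
                             warmup=False, warmup_epochs=5):
    """Appendix A schedule: optional 5-epoch linear warmup, constant learning
    rate for the first half of training, then decay by 2x every 5% of the
    epoch budget (10 decays in total)."""
    if warmup and epoch < warmup_epochs:
        return max_lr * (epoch + 1) / warmup_epochs
    if epoch < epoch_budget / 2:
        return max_lr
    n_decays = int((epoch - epoch_budget / 2) // (epoch_budget / 20)) + 1
    return max_lr / (2 ** n_decays)
```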
To derive the linear scaling rule, we consider the total change in the parameters over n consecutive parameter updates, ω_{i+n} = ω_i − ε Σ_{j=0}^{n−1} (dC/dω(ω_{i+j}) + δ_{i+j}/√B) (equation 6). The noise over these n updates is ξ_i = (1/√n) Σ_{j=0}^{n−1} δ_{i+j}. When the product of the number of steps n and the learning rate ε is much smaller than the critical learning rate, nε ≪ ε_crit, the parameters do not move far enough for the gradients to significantly change, so equation 6 reduces to ω_{i+n} ≈ ω_i − nε(dC/dω(ω_i) + ξ_i/√(nB)) (equation 7), and therefore, for all {j, j'} greater than 0 and less than n, equation 5 implies that E(δ_{i+j}) = 0 and E(δ_{i+j} δ_{i+j'}) = F(ω_i)δ_{jj'}, while equation 6 implies that E(ξ_i) = 0 and E(ξ_i ξ_i) ≈ F(ω_i). We therefore conclude that ξ and δ are both Gaussian random variables from the same distribution. Comparing equation 3 and equation 7, we conclude that n SGD updates at temperature T with learning rate ε are equivalent to a single SGD step at temperature T with learning rate nε. Since the temperature T = ε/B, this implies that, when nε ≪ ε_crit, simultaneously doubling both the learning rate and the batch size should draw samples from the same distribution over parameters after the same number of training epochs. This prediction is known as the linear scaling rule (e.g. Jastrzębski et al., 2017). Since this linear scaling rule assumes that nε ≪ ε_crit, it usually holds when the batch size is small, which appears to contradict the assumption B ≫ 1 above. Crucially however, the distribution of δ_i does not matter in practice, since our dynamics is governed by the combined influence of noise over multiple consecutive updates, ξ_i = (1/√n) Σ_{j=0}^{n−1} δ_{i+j}. In other words, we do not require that equation 3 is an accurate model of a single SGD step, we only require that equation 7 is an accurate model of n SGD steps. We therefore conclude that δ_i does not need to be Gaussian, we only require that ξ_i is Gaussian. The central limit theorem predicts that, if δ_i is an independent random sample from a short-tailed distribution, ξ_i will be Gaussian if N ≫ 1, nB ≫ 1 and nB ≪ N. If ε ≪ ε_crit, then we can choose 1 ≪ n ≪ N/B, and discard the assumption B ≫ 1. [Figure 4 caption: The smaller batch size is in the noise dominated regime, while the larger batch size is in the curvature dominated regime. We provide the final test accuracy at a range of learning rates in figures a and b, and we provide the final training loss at a range of learning rates in figures c and d. We note that SGD, SGD with Momentum, and SGD with Momentum and learning rate warmup always achieve similar final performance in the small learning rate limit, while SGD performs poorly when the learning rate is large. When the batch size is small, the optimal learning rate is also small, and so all three methods perform similarly. When the batch size is large, the optimal learning rate is also large, and SGD underperforms both SGD with Momentum and SGD with Momentum and learning rate warmup.] In the main text, we provided a number of experimental results on wide residual networks, trained without batch normalization using "ZeroInit". This simple initialization scheme, introduced in a parallel submission, is now presented here for the reader's convenience. The scheme comprises three simple modifications to the original Wide-ResNet architecture: 1. It introduces a scalar multiplier at the end of each residual branch, just before the residual branch and skip connection merge. This scalar multiplier is initialized to zero. 2. It introduces biases to each convolutional layer, which are initialized to zero. 3. It introduces dropout on the final fully connected layer.
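A minimal sketch of modification 1 (and the convolutional biases of modification 2) is given below. The conv-ReLU-conv branch is illustrative only, not the exact Wide-ResNet block; what matters is the learnable scalar initialized to zero, which makes every residual block the identity at initialization.

```python
import torch
import torch.nn as nn

class ZeroInitResidualBlock(nn.Module):
    """Residual block with the ZeroInit modifications described above."""
    def __init__(self, channels):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=True),  # mod. 2: biases (init to zero by default here)
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1, bias=True),
        )
        # Modification 1: scalar multiplier on the residual branch, init to zero.
        self.scale = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        return x + self.scale * self.branch(x)
```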
Modification 1 is sufficient to train very deep residual networks without batch normalization, while modifications 2 and 3 slightly increase the final test accuracy. We emphasize that dropout is included solely to illustrate that additional regularization is required when batch normalization is removed. Dropout itself is not required to train without batch normalization, and similar or superior results could be obtained with alternative regularizers. Throughout this paper, the drop probability is 60%. [Figure caption: We report the performance of SGD, SGD w/ Momentum, and SGD w/ Momentum using learning rate warmup. We perform a grid search to identify the optimal learning rate which maximizes the test accuracy, and report the mean performance of the best 12 of 15 runs. a) The test accuracy of SGD w/ Momentum is independent of batch size when the batch size is small, but begins to fall when the batch size exceeds 256. The test accuracy of vanilla SGD is falling for all batch sizes considered. b) The training loss at the optimal effective learning rate is independent of batch size when the batch size is small, but rises rapidly if acceleration techniques are not used when the batch size is large. c) The optimal effective learning rate is proportional to batch size when the batch size is small for SGD w/ Momentum, while it is independent of batch size for vanilla SGD.] Here we provide some additional results studying how the optimal effective learning rate depends on the batch size under a constant epoch budget. Additional results on the 16-4 Wide-ResNet on CIFAR with batch normalization: In figure 4, we provide additional results with the 16-4 Wide-ResNet, trained with batch normalization on CIFAR-10 for 200 epochs. Here we provide the final test set accuracies and the final training set losses for a full learning rate sweep at two batch sizes, 64 and 1024. From figure 4, we see that SGD, SGD with Momentum and warmup learning rates always achieve similar final performance in the small learning rate limit. This confirms previous theoretical work showing the equivalence of SGD and SGD with momentum in the small learning rate limit when the momentum parameter is kept fixed. Further, we see that SGD performs poorly compared to SGD with Momentum and warmup learning rates when the learning rate is large. When the batch size is small, the optimal learning rates for all three methods are also small, and so all three methods perform similarly. On the other hand, when the batch size is large, the optimal learning rates of SGD with Momentum and warmup learning rates are higher than the optimal learning rate for SGD, and we see Momentum and warmup learning rates start outperforming vanilla SGD. These results are entirely consistent with the two regimes of SGD and SGD with Momentum discussed in section 2. 16-4 Wide-ResNet on CIFAR without batch normalization: In figure 5 we present results when training our 16-4 Wide-ResNet. We follow the same setup and learning rate schedule described in section 3, and we train for 200 epochs. However we remove batch normalization, and introduce ZeroInit (described in appendix C). The performance of SGD degrades with increasing batch size on both the test set and the training set, while the performance of Momentum with or without learning rate warmup is constant for batch sizes B ≤ 256. Above this threshold, the performance of both methods degrades rapidly. These observations are explained by the optimal effective learning rates in figure 5c. Momentum has similar optimal learning rates with and without warmup.
In both cases the learning rate initially increases proportional to the batch size before saturating. The optimal learning rate of SGD is curvature bound at all batch sizes considered. ResNet-50 on ImageNet: In table 3, we provide results for ResNet-50 trained on ImageNet for 90 epochs at a small range of batch sizes. We follow the modified ResNet-50 implementation of prior work, and we use our standardized learning rate schedule without warmup (see appendix A). We train a single model at each batch size-learning rate pair. SGD with and without Momentum achieve similar test accuracies when the batch size is small, but SGD with Momentum outperforms SGD without Momentum when the batch size is large. The optimal effective learning rate is proportional to batch size for all batch sizes considered when using SGD with Momentum, but not when using SGD without Momentum. [Figure 6 caption: A fully connected autoencoder, trained on MNIST for 200 epochs. We report the performance of SGD and SGD w/ Momentum. We perform a grid search to identify the optimal learning rate which minimizes the mean-squared error (MSE) on the test set, and report the mean performance of the best 5 of 7 runs. a) The test MSE of SGD w/ Momentum is initially independent of batch size, but it begins to rise when the batch size exceeds 128. The test MSE of vanilla SGD starts rising for batch sizes exceeding 16. b) We see similar phenomena on the training set MSE. c) The optimal effective learning rate is proportional to batch size when the batch size is small for both vanilla SGD and SGD w/ Momentum, while it becomes independent of batch size for larger batch sizes. The optimal effective learning rate in the curvature dominated regime is larger for SGD w/ Momentum.] Fully connected autoencoder on MNIST: In figure 6, we present results when training a fully-connected autoencoder on the MNIST dataset. Our network architecture is described by the sequence of layer widths {784, 1000, 500, 250, 30, 250, 500, 1000, 784}, where 784 denotes the input and output dimensions. For more details on this architecture, refer to the original reference. Because of the bottleneck structure of the model, it is known to be a difficult problem to optimize and has often been used as an optimization benchmark. The L2 regularization parameter was set at 10^−5. We use the learning rate decay schedule described in appendix A without learning rate warmup, and we train each model for 200 epochs. As before, we notice that for small batch sizes, the performance of both SGD and SGD w/ momentum is independent of batch size, while performance begins to degrade when the batch size is large. On this model, the performance of SGD begins to degrade at much smaller batch sizes than we observed in residual networks, and consequently SGD w/ momentum starts outperforming SGD at much smaller batch sizes. This is likely due to the poor conditioning of the model due to the bottleneck structure of its architecture. LSTM on Penn TreeBank: Finally in figure 7, we present results when training a word-level LSTM language model on the Penn TreeBank dataset (PTB), following the implementation described in prior work. [Table 3 caption: ResNet-50, trained on ImageNet for 90 epochs. We follow the implementation of prior work, however we introduce our modified learning rate schedule defined in appendix A. We do not use learning rate warmup. We perform a grid search to identify the optimal effective learning rate and report the performance of a single training run. The test accuracies achieved by SGD and Momentum are equal when the batch size is small, but Momentum outperforms SGD when the batch size is large. For SGD with Momentum, the optimal effective learning rate is proportional to batch size for all batch sizes considered, while this linear scaling rule breaks at large batch sizes for SGD.]
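A minimal sketch of the bottleneck autoencoder described above is given below. The layer widths and the weight decay of 10^−5 come from the text; the choice of ReLU activations is an assumption, as the paper only specifies the widths.

```python
import torch
import torch.nn as nn

widths = [784, 1000, 500, 250, 30, 250, 500, 1000, 784]

layers = []
for i in range(len(widths) - 1):
    layers.append(nn.Linear(widths[i], widths[i + 1]))
    if i < len(widths) - 2:  # no activation after the output layer
        layers.append(nn.ReLU())
autoencoder = nn.Sequential(*layers)

# L2 regularization implemented as weight decay of 1e-5, matching the text.
optimizer = torch.optim.SGD(autoencoder.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-5)
```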
[Figure 7 caption: A word-level LSTM language model trained on PTB for 40 epochs. We report the performance of SGD and SGD w/ Momentum. We perform a grid search to identify the optimal learning rate which minimizes the test set perplexity, and report the mean performance of the best 5 of 7 runs. a) The test set perplexity of SGD w/ Momentum is independent of batch size when the batch size is small, but begins to rise when the batch size exceeds 128. The test set perplexity of vanilla SGD starts rising for batch sizes exceeding 64. b) We see similar phenomena on the training set perplexity. c) The optimal effective learning rate is proportional to the square root of the batch size when the batch size is small, while it levels off for larger batch sizes. The gradients of consecutive minibatches in a language model are not independent, violating the assumptions behind linear scaling.] The LSTM used has two layers with 650 units per layer. The parameters are initialized uniformly in [−0.05, 0.05]. We apply gradient clipping at 5, as well as dropout with probability 0.5 on the non-recurrent connections. We train the LSTM for 40 epochs using an unroll step of 35, and use the learning rate decay schedule described in appendix A without learning rate warmup. As with the other models tested in this paper, this learning rate schedule reaches the same test perplexity performance as the originally reported schedules. Once again, we see that SGD and SGD w/ momentum have similar performance for small batch sizes. Performance for SGD starts degrading for batch sizes exceeding 64, whereas performance for SGD w/ momentum starts degrading for batch sizes exceeding 128. However, as mentioned in section 3, the optimal learning rate increases as the square root of the batch size for small batch sizes, before leveling off at a constant value for larger batch sizes. This is likely due to correlations in consecutive data samples when training the LSTM, which violate the assumptions used to derive the linear scaling rule in section 2. Here we provide some additional results studying how the optimal test accuracy depends on the batch size under a constant step budget. In table 4, we train a fully connected auto-encoder on MNIST for 156,250 updates. This corresponds to 200 epochs when the batch size is 64. We described this model in appendix D, and we train using the learning rate schedule defined in appendix A using SGD with Momentum without learning rate warmup. The test set MSE increases as the batch size increases, while the training set MSE falls as the batch size rises. Although the training set MSE appears to rise for a batch size of 4096, this only occurs because the optimal effective learning rate is measured on the test set. The optimal effective learning rate is independent of the batch size, suggesting that the learning rate may be close to the curvature dominated regime. In table 5, we train a word-level LSTM on the Penn TreeBank (PTB) dataset for 16560 updates. This corresponds to 40 epochs when the batch size is 64. We described this model in appendix D, and we train using the learning rate schedule defined in appendix A using SGD with Momentum without learning rate warmup.
The test perplexity increases significantly as the batch size increases, while the training perplexity falls as the batch size rises. The optimal effective learning rate increases as the batch size rises, suggesting that we are inside the noise dominated regime. We now provide additional experimental results to accompany those provided in section 5, where we study whether the optimal training temperature is independent of the epoch budget. We use SGD with Momentum with the momentum parameter m = 0.9 for all our experiments in this section. In figure 8, we present results on a word-level LSTM on the PTB dataset for a batch size of 64 and for varying epoch budgets. Note that the original LSTM model was trained for 39 epochs. The results in figure 8 are remarkably similar to those presented in figure 2. As the epoch budget rises, the test set perplexity first falls but then begins to increase. The training set perplexity falls monotonically as the epoch budget increases. Finally, the optimal learning rate which minimizes the test set perplexity is independent of the epoch budget once this epoch budget is not too small, while the optimal learning rate which minimizes the training set perplexity falls. In figure 9, we present results on a fully connected autoencoder trained on MNIST for a batch size of 32 and for a range of epoch budgets. Note that the autoencoder results presented in appendix D were obtained after training for 200 epochs. Figures 9a and 9b are similar to figures 2a and 2b in the main text. Initially the test set MSE falls as the epoch budget increases, but then it starts increasing. The training set MSE falls monotonically as the epoch budget rises. In figure 9c however, we notice that the learning rate that minimizes the test set MSE decreases as the epoch budget rises. This is the opposite of what we observed in figures 2 and 8. To further investigate this, in figure 9d we plot the mean test set MSE during training for an epoch budget of 800 for learning rates ε = 0.004 and ε = 0.002. We notice that for the larger learning rate ε = 0.004, the model overfits faster on the training set, causing the test set MSE to rise by the time of the first learning rate drop at 400 epochs. This is consistently the case for all epoch budgets over 200 epochs. To prevent the test set MSE from rising, the optimal learning rate for the test MSE drops to slow down training sufficiently such that there is no overfitting before the first learning rate decay. Meanwhile the optimal learning rate to minimize the training loss is more or less constant. This suggests that early stopping is particularly important in this architecture and dataset, and that it has more influence on the final test performance than stochastic gradient noise. [Figure 8 caption: The performance of a word-level LSTM language model trained on the Penn TreeBank dataset using SGD with Momentum and a batch size of 64 at a range of epoch budgets. We identify both the optimal effective learning rate which minimizes the test set perplexity and the optimal effective learning rate which minimizes the training set perplexity, and we present the mean performance of the best 5 out of 7 runs. a) Initially the test set perplexity falls as the epoch budget increases, however it begins to rise beyond 56 training epochs. b) The training set perplexity falls monotonically as the epoch budget rises. c) The learning rate that minimizes the training set perplexity falls as the epoch budget rises, while the learning rate that minimizes the test set perplexity only varies by a factor of 2 when the epoch budget rises over two orders of magnitude.]
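The PTB LSTM used throughout these appendices can be sketched as follows. The two layers of 650 units, the uniform initialization in [−0.05, 0.05], dropout of 0.5 on the non-recurrent connections, the unroll of 35 steps, and gradient clipping at 5 all come from the text; the vocabulary size of 10,000 is the standard PTB value and is an assumption here, as is tying dropout to the embedding output.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, HIDDEN, UNROLL = 10_000, 650, 35

class PTBModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.drop = nn.Dropout(0.5)  # dropout on non-recurrent connections
        self.lstm = nn.LSTM(HIDDEN, HIDDEN, num_layers=2, dropout=0.5,
                            batch_first=True)
        self.head = nn.Linear(HIDDEN, VOCAB)
        for p in self.parameters():  # uniform init in [-0.05, 0.05]
            nn.init.uniform_(p, -0.05, 0.05)

    def forward(self, tokens, state=None):
        h, state = self.lstm(self.drop(self.embed(tokens)), state)
        return self.head(self.drop(h)), state

model = PTBModel()
tokens = torch.randint(0, VOCAB, (20, UNROLL + 1))  # a dummy minibatch
logits, _ = model(tokens[:, :-1])
loss = F.cross_entropy(logits.reshape(-1, VOCAB), tokens[:, 1:].reshape(-1))
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 5.0)  # clip gradients at 5
```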
[Figure 9 caption: The performance of a fully connected autoencoder on MNIST using SGD with Momentum and a batch size of 32 with varying training epoch budget. We identify both the optimal effective learning rate which minimizes the test set MSE and the optimal effective learning rate which minimizes the training set MSE, and we present the mean performance of the best 5 out of 7 runs. a) Initially the test set MSE falls as the epoch budget increases, and it only starts going up very slightly for large epoch budgets. b) The train set MSE falls monotonically as the epoch budget rises. c) The learning rate that minimizes the test set MSE decreases, while the learning rate that minimizes the train set MSE remains constant as the epoch budget rises. This is contrary to what we observe in figures 2 and 8. The reason for this is apparent from figure d), where we plot the test set MSE during training for all 7 runs for an epoch budget of 800 for learning rates ε = 0.004 and ε = 0.002. We notice that for a larger learning rate, the model overfits on the training set faster, causing the test set MSE to rise by the time of the first learning rate drop at 400 epochs. This suggests that early stopping has more influence on the final test performance in this architecture than stochastic gradient noise.] In section 5 and appendix F, we study how the optimal learning rate depends on the epoch budget. In these experiments, we find evidence that there may be an optimal temperature early in training which is beneficial for good generalization performance on classification tasks. However the learning rate schedules used for these experiments have the property that the initial learning rate, which we denote in this section with ε_0, is coupled with the final learning rate, which we denote in this section with ε_f. More specifically, the final learning rate ε_f = ε_0 · γ^−10, where γ denotes the decay factor, which we set to 2 in the bulk of our experiments (this schedule is described in detail in section A). Coupling the initial and final learning rates makes it less clear whether this optimal temperature is important at the start or the end of training, and it is also not clear to what extent our results are influenced by our choice of decay factor. [Figure 10 caption: Performance of a 16-4 Wide-ResNet with batch normalization trained on CIFAR-10 using SGD with Momentum at a batch size of 64. We tune the initial and the final learning rates independently, as described in section G. We plot both the optimal initial and final learning rates for maximizing the test set accuracy, as well as the optimal initial and final learning rates for minimizing the training set loss, and we present the mean performance of the best 5 out of 7 runs. a) The test accuracy initially increases with increasing compute budget before saturating for epoch budgets greater than 800. b) Meanwhile the training loss falls monotonically as the epoch budget rises. c) The optimal initial learning rate which maximizes the test accuracy is constant for epoch budgets greater than 400, while the optimal final learning rate decays rapidly as the epoch budget increases. d) The optimal initial learning rate which minimizes the training loss decays slowly as the epoch budget increases, while the optimal final learning rate decays more rapidly.]
Therefore in this section, we perform experiments on varying epoch budgets where we tune both the initial and the final learning rates independently. As in the schedule presented in section A, when training for an epoch budget of N_epochs, we use the initial learning rate for the first N_epochs/2 epochs, and we then decay the learning rate by a factor of γ every N_epochs/20 epochs. To define γ, we select both an initial learning rate ε_0 and a final learning rate ε_f, and we then set γ = (ε_0/ε_f)^{1/10}. We do not use learning rate warmup. These experiments require a very large compute budget, and so we only study our 16-4 Wide-ResNet model with batch normalization. In figure 10, we show results when training this 16-4 Wide-ResNet on CIFAR-10 at a batch size of 64 using SGD with Momentum. We train for a range of epoch budgets from 50 to 6400 epochs, and we evaluate the optimal initial and final learning rates independently for both maximizing the test set accuracy and minimizing the training loss. From figures 10a and 10b, we see the same trends as observed in section 5 and appendix F. Specifically, when we increase the compute budget, the optimal test set accuracy first increases, and then saturates for epoch budgets greater than 800, while the optimal training loss falls monotonically as the epoch budget grows. In figures 10c and 10d, we plot the optimal initial and final learning rates for maximizing the test set accuracy (10c) and minimizing the training set loss (10d). We make several observations from these plots. First, the optimal initial learning rate for maximizing the test set accuracy decays very slowly as the epoch budget rises, and it is constant for epoch budgets greater than 400. This supports the existence of an optimal temperature early in training which boosts generalization performance. Meanwhile, the optimal final learning rate for maximizing the test set accuracy does decay rapidly as the epoch budget increases, which is likely helpful to prevent overfitting at late times. We note that the error bars on the final learning rate are much larger than those on the initial learning rate, suggesting that it is the initial learning rate which is most important to tune in practice. The optimal initial learning rate for minimizing the training loss also decays slowly as the epoch budget rises (decreasing by a factor of 4 to 8 when the epoch budget rises by a factor of 128), while the optimal final learning rate for minimizing the training loss decays much more quickly (roughly by a factor of 128 over the same range). The optimal initial learning rate for maximizing the test accuracy is consistently higher than the optimal initial learning rate for minimizing the training loss, while the optimal final learning rate for maximizing the test accuracy is consistently lower than the optimal final learning rate for minimizing the training loss. These two observations support the widely held belief that learning rate schedules which maintain a high temperature at early times, and then decay the learning rate rapidly at late times, generalize well to the test set in some architectures.
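The independently-tuned schedule of appendix G can be sketched directly from the definitions above: γ is chosen so that ten decay steps in the second half of training connect ε_0 to ε_f exactly.

```python
def two_point_schedule(epoch, epoch_budget, lr_initial, lr_final):
    """Appendix G schedule: constant at lr_initial for the first half of
    training, then decay every epoch_budget/20 epochs by
    gamma = (lr_initial / lr_final) ** (1 / 10), reaching lr_final at the end."""
    gamma = (lr_initial / lr_final) ** (1 / 10)
    if epoch < epoch_budget / 2:
        return lr_initial
    n_decays = int((epoch - epoch_budget / 2) // (epoch_budget / 20)) + 1
    return lr_initial / (gamma ** n_decays)
```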
Smaller batch sizes can outperform very large batches on the test set under constant step budgets and with properly tuned learning rate schedules.
941
scitldr
Counterfactual Regret Minimization (CFR) is the most successful algorithm for finding approximate Nash equilibria in imperfect information games. However, CFR's reliance on full game-tree traversals limits its scalability and generality. Therefore, the game's state- and action-space is often abstracted (i.e. simplified) for CFR, and the resulting strategy is then mapped back to the full game. This requires extensive expert-knowledge, is not practical in many games outside of poker, and often converges to highly exploitable policies. A recently proposed method, Deep CFR, applies deep learning directly to CFR, allowing the agent to intrinsically abstract and generalize over the state-space from samples, without requiring expert knowledge. In this paper, we introduce Single Deep CFR (SD-CFR), a variant of Deep CFR that has a lower overall approximation error by avoiding the training of an average strategy network. We show that SD-CFR is more attractive from a theoretical perspective and empirically outperforms Deep CFR with respect to exploitability and one-on-one play in poker. In perfect information games, players usually seek to play an optimal deterministic strategy. In contrast, sound policy optimization algorithms for imperfect information games converge towards a Nash equilibrium, a distributional strategy characterized by minimizing the losses against a worst-case opponent. The most popular family of algorithms for finding such equilibria is Counterfactual Regret Minimization (CFR). Conventional CFR methods iteratively traverse the game-tree to improve the strategy played in each state. For instance, CFR+, a fast variant of CFR, was used to solve two-player Limit Texas Hold'em Poker, a variant of poker frequently played by humans. However, the scalability of such tabular CFR methods is limited since they need to visit a given state to update the policy played in it. In games too large to fully traverse, practitioners hence often employ domain-specific abstraction schemes that can be mapped back to the full game after training has finished. Unfortunately, these techniques have been shown to lead to highly exploitable policies in the large benchmark game Heads-Up No-Limit Texas Hold'em Poker (HUNL) and typically require extensive expert knowledge. To address these two problems, researchers started to augment CFR with neural network function approximation, first resulting in DeepStack (Moravčík et al., 2017). Concurrently with Libratus, DeepStack was one of the first algorithms to defeat professional poker players in HUNL, a game consisting of 10^160 states and thus being far too large to fully traverse. While tabular CFR has to visit a state of the game to update its policy in it, a parameterized policy may be able to play an educated strategy in states it has never seen before. Purely parameterized (i.e. non-tabular) policies have led to great breakthroughs in AI for perfect information games and were recently also applied to large imperfect information games by Deep CFR to mimic a variant of tabular CFR from samples. Deep CFR's strategy relies on a series of two independent neural approximations. In this paper, we introduce Single Deep CFR (SD-CFR), a simplified variant of Deep CFR that obtains its final strategy after just one neural approximation by using what Deep CFR calls value networks directly instead of training an additional network to approximate the weighted average strategy. This reduces the overall sampling- and approximation error and makes training more efficient.
We show experimentally that SD-CFR improves upon the convergence of Deep CFR in poker games and outperforms Deep CFR in one-on-one matches. This section introduces extensive-form games and the notation we will use throughout this work. Formally, a finite two-player extensive-form game with imperfect information is a set of histories H, where each history is a path from the root φ ∈ H to any particular state. The subset Z ⊂ H contains all terminal histories. A(h) is the set of actions available to the acting player at history h, who is chosen from the set {1, 2, chance} by the player function P(h). In any h ∈ H where P(h) = chance, the action is chosen by the dynamics of the game itself. Let N = {1, 2} be the set of both players. When referring to a player i ∈ N, we refer to his opponent by −i. All nodes z ∈ Z have an associated utility u(z) for each player. This work focuses on zero-sum games, defined by the property u_1(z) = −u_2(z) for all z ∈ Z. Imperfect information is represented by partitioning H into information sets. An information set I_i is a subset of H, where histories h, h' ∈ H are in the same information set if and only if player i cannot distinguish between h and h' given his private and all available public information. For each player i ∈ N, an information partition I_i is a set of all such information sets. Let A(I) = A(h) and P(I) = P(h) for all h ∈ I and each I ∈ I_i. Each player i chooses actions according to a behavioural strategy σ_i, with σ_i(I, a) being the probability of choosing action a when in I. We refer to a tuple (σ_1, σ_2) as a strategy profile σ. Let π^σ(h) be the probability of reaching history h if both players follow σ and let π^σ_i(h) be the probability of reaching h if player i acts according to σ_i and player −i always acts deterministically to get to h. It follows that the probability of reaching an information set I if both players follow σ is π^σ(I) = Σ_{h∈I} π^σ(h) and is π^σ_i(I) = Σ_{h∈I} π^σ_i(h) if −i plays to get to I. Player i's expected utility from any history h assuming both players follow strategy profile σ from h onward is denoted by u^σ_i(h). Thus, their expected utility over the whole game given a strategy profile σ can be written as u_i(σ) = Σ_{z∈Z} π^σ(z) u_i(z). Finally, a strategy profile σ = (σ_1, σ_2) is a Nash equilibrium if no player i could increase their expected utility by deviating from σ_i while −i plays according to σ_−i. We measure the exploitability e(σ) of a strategy profile by how much its optimal counter strategy profile (also called best response) can beat it by. Let us denote a function that returns the best response to σ_i by BR(σ_i). Formally, e(σ) = (u_1(BR(σ_2), σ_2) + u_2(σ_1, BR(σ_1))) / 2. Counterfactual Regret Minimization (CFR) is an iterative algorithm. It can run either simultaneous or alternating updates. If the former is chosen, CFR produces a new iteration-strategy σ^t_i for all players i ∈ N on each iteration t. In contrast, alternating updates produce a new strategy for only one player per iteration, with player t mod 2 updating his strategy on iteration t. To understand how CFR converges to a Nash equilibrium, let us first define the instantaneous regret for player i of action a ∈ A(I) in any I ∈ I_i as r^t_i(I, a) = v^{σ^t}_i(I, a) − v^{σ^t}_i(I) (equation 1), where v^σ_i(I) = Σ_{h∈I} π^σ_{−i}(h) u^σ_i(h) is the counterfactual value of I and v^σ_i(I, a) is defined analogously, with player i always choosing a in I. Intuitively, r^t_i(I, a) quantifies how much more player i would have won (in expectation), had he always chosen a in I and played to get to I but according to σ^t thereafter. The overall regret on iteration T is R^T_i(I, a) = Σ_{t=1}^{T} r^t_i(I, a). Now, the iteration-strategy for player i can be derived by σ^{t+1}_i(I, a) = R^{t,+}_i(I, a) / Σ_{a'∈A(I)} R^{t,+}_i(I, a') if that denominator is positive, and 1/|A(I)| otherwise, where R^{t,+}_i(I, a) = max(R^t_i(I, a), 0) (equation 2). The iteration-strategy profile σ^t does not converge to an equilibrium as t → ∞ in most variants of CFR.
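A minimal sketch of the regret-matching update in equation 2 is given below; `cumulative_regret` is assumed to hold R^t_i(I, a) for every action in A(I).

```python
import numpy as np

def regret_matching(cumulative_regret):
    """Derive an iteration-strategy from cumulative regrets (equation 2):
    play actions in proportion to their positive regret, or uniformly
    at random if no action has positive regret."""
    positive = np.maximum(cumulative_regret, 0.0)
    total = positive.sum()
    if total > 0:
        return positive / total
    return np.full_like(cumulative_regret, 1.0 / len(cumulative_regret))
```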
The policy that has been shown to converge to an equilibrium profile is the average strategy σ̄^T_i. For all I ∈ I_i and each a ∈ A(I) it is defined as σ̄^T_i(I, a) = Σ_{t=1}^{T} π^{σ^t}_i(I) σ^t_i(I, a) / Σ_{t=1}^{T} π^{σ^t}_i(I) (equation 3). Aiming to solve ever bigger games, researchers have proposed many improvements upon vanilla CFR over the years (e.g. Moravčík et al., 2017). These improvements include alternative methods for regret updates, automated schemes for abstraction design, and sampling variants of CFR. Many of the most successful algorithms of the recent past also employ real-time solving or re-solving (Moravčík et al., 2017). Discounted CFR (DCFR) slightly modifies the equations for R^T_i(I, a) and σ̄^T_i. A special case of DCFR is linear CFR (LCFR), where the contribution of the instantaneous regret of iteration t as well as the contribution of σ^t to σ̄^T is weighted by t. This change alone suffices to let LCFR converge up to two orders of magnitude faster than vanilla CFR does in some large games. Monte-Carlo CFR (MC-CFR) proposes a family of tabular methods that visit only a subset of information sets on each iteration. Different variants of MC-CFR can be constructed by choosing different sampling policies. One such variant is External Sampling (ES), which executes all actions for player i, the traverser, in every I ∈ I_i but draws only one sample for actions not controlled by i (i.e. those of −i and chance). In games with many player-actions, Average Strategy Sampling and Robust Sampling are very useful. They, in different ways, sample only a subset of actions for i. Both LCFR and a similarly fast variant called CFR+ are compatible with forms of MC sampling, although CFR+ was regarded as too sensitive to variance until recently. CFR methods either need to run on the full game tree or employ domain-specific abstractions. The former is infeasible in large games and the latter not easily possible in all domains. Deep CFR computes an approximation of linear CFR with alternating player updates. It is sample-based and does not need to store regret tables, making it generally applicable to any two-player zero-sum game. On each iteration, Deep CFR fits a value network D̂_i for one player i to approximate what we call advantage, which is defined as the overall regret normalized by the cumulative counterfactual reach, D^t_i(I, a) = R^t_i(I, a) / Σ_{t'=1}^{t} π^{σ^{t'}}_{−i}(I). In large games, reach-probabilities naturally are (on average) very small after many tree-branchings. Considering that it is hard for neural networks to learn values across many orders of magnitude (van Hasselt et al., 2016), Deep CFR divides R^t_i(I, a) by this cumulative reach.
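The reservoir sampling used for the buffers B^v and B^s keeps a uniform random subset of everything ever added once capacity is exceeded. A minimal sketch:

```python
import random

class ReservoirBuffer:
    """Fixed-capacity buffer updated with reservoir sampling, as used for the
    advantage memory B^v and the strategy memory B^s in Deep CFR."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0

    def add(self, item):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(item)
        else:
            # Replace a random slot with probability capacity / n_seen, which
            # keeps every item seen so far in the buffer with equal probability.
            j = random.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = item
```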
Notice that tabular CFR achieves importance weighting between iterations through multiplying with some form of the reach probability (see equations 1 and 3). In contrast, Deep CFR does so by controlling the expected frequency of datapoints from different iterations occurring in its buffers and by weighting the neural network losses differently for data from each iteration. Notice that storing all iteration-strategies would allow one to compute the average strategy on the fly during play both in tabular and approximate CFR variants. In tabular methods, the gain of not needing to keepσ in memory during training would come at the cost of storing t equally large tables (though potentially on disk) during training and during play. However, this is very different with Deep CFR. Not aggregating intoŜ removes the sampling-and approximation error that B s andŜ introduce, respectively. Moreover, the computational work needed to trainŜ is no longer required. Like in the tabular case, we do need to keep all iteration strategies, but this is much cheaper with Deep CFR as strategies are compressed within small neural networks. We will now look at two methods for queryingσ from a buffer of past value networks B M. Often (e.g. in one-one-one evaluations and during rollouts), a trajectory is played from the root of the game-tree and the agent is only required to return action-samples of the average strategy on each step forward. In this case, SD-CFR chooses a value networkD i at the start of the game, where eachD t i is assigned sampling weight t. The policy σ i, which this network gives by equation 4, is now going to be used for the whole game trajectory. We call this method trajectory-sampling. By applying the sampling weights when selecting aD i ∈ B M i, we satisfy the linear averaging constraint of equation 5, and by using the same σ i for the whole trajectory starting at the root, we ensure that the iteration-strategies are also weighted proportionally to each of their reach-probabilities in any given state along that trajectory. The latter happens naturally, sinceD t i of any t produces σ t i, which reaches each information set I with a likelihood directly proportional to π σ t i (I) when playing from the root. The query cost of this method is constant with the number of iterations (and equal to the cost of querying Deep CFR). Let us now consider querying the complete action probability distributionσ Here, I ∈ I means that I is on the trajectory leading to I and a: I → I is the action selected in I leading to I. This computation can be done with at most 2 as many feedforward passes through each network in B M i as player i had decisions along the trajectory to I, typically taking a few seconds in poker when done on a CPU. If a trajectory is played forward from the root, as is the case in e.g. exploitability evaluation, we can cache the step-wise reach-probabilities on each step I k along the trajectory and compute π, where a is the action that leads from I k to I k+1. This reduces the number of queries per step to at most |B M i |. SD-CFR always mimicsσ T i correctly from the iteration-strategies it is given. Thus, if these iterationstrategies were perfect approximations of the real iteration-strategies, SD-CFR is equivalent to linear CFR (see Theorem 2), which is not necessarily true for Deep CFR (see Theorem 1). As we later show in an experiment, SD-CFR's performance degrades if reservoir sampling is performed on B M after the number of iterations trained exceeds the buffer's capacity. 
Thankfully, the neural network proposed to be used for Deep CFR in large poker games has under 100,000 parameters and thus requires under 400KB of disk space. Deep CFR is usually trained for just a few hundred iterations, but storing even 25,000 such networks on disk would need only 10GB of disk space. At no point during any computation do we need all networks in memory. Thus, keeping all value networks will not represent a problem in practice. Observing that Deep CFR and SD-CFR depend upon the accuracy of the value networks in exactly the same way, we can conclude that SD-CFR is a better or equally good approximation of linear CFR as long as all value networks are stored. Though this shows that SD-CFR is largely superior in theory, it is not implicit that SD-CFR will always produce stronger strategies empirically. We will investigate this next. We empirically evaluate SD-CFR by comparing to Deep CFR and by analyzing the effect of sampling on B^M. Recall that Deep CFR and SD-CFR are equivalent in how they train their value networks. This allows both algorithms to share the same value networks in our experiments, which makes comparisons far less susceptible to variance over algorithm runs and conveniently guarantees that both algorithms tend to the same Nash equilibrium. Where not otherwise noted, we use hyperparameters as in the original Deep CFR setup. Our environment observations include additional features such as the size of the pot and represent cards as concatenated one-hot vectors without any higher level features, but are otherwise as in prior work. In Leduc Poker, players start with an infinite number of chips. The deck has six cards of two suits {a, b} and three ranks {J, Q, K}. There are two betting rounds: preflop and flop. After the preflop, a card is publicly revealed. At the start of the game, each player adds 1 chip, called the ante, to the pot and is dealt one private card. There are at most two bets/raises per round, where the bet-size is fixed at 2 chips in the preflop, and 4 chips after the flop is revealed. If no player folded, the winner is determined via hand strength. If a player made a pair with the public card, they win. Otherwise K > Q > J. If both players hold a card of the same rank, the pot is split. Hyperparameters are chosen to favour Deep CFR as the neural networks and buffers are very large in relation to the size of the game. Yet, we find that SD-CFR minimizes exploitability better than Deep CFR. Exact hyperparameters can be found in the supplementary material. Although we concluded that storing all value networks is feasible, we analyze the effect of reservoir sampling on B^M in Figure 1b and find it leads to plateauing and oscillation, at least up to |B^M| = 1000. Figure 2 shows the results of one-on-one matches between SD-CFR and Deep CFR in 5-Flop Hold'em Poker (5-FHP). 5-FHP is a large poker game similar to regular FHP, which was used to evaluate Deep CFR. [Table 1 caption: Disagreement between SD-CFR's and Deep CFR's average strategies. "DEPTH": number of player actions up until the measurement, "ROUND": PF=Preflop, FL=Flop, "DIF MEAN": mean and 95% confidence interval of the absolute differences between the strategies over the "N" occurrences. "DIF STD": approximate standard deviation of agreement across information sets.] For details on FHP, please refer to the Deep CFR paper. The neural architecture is as in prior work. Both algorithms again share the same value networks during each training run. The y-axis plots SD-CFR's average winnings against Deep CFR in milli-big blinds per game (mbb/g) measured every 30 iterations.
For reference, 10 mbb/g is considered a good margin between humans in Heads-Up Limit Hold'em (HULH), a game with longer action sequences, but similar minimum and maximum winnings per game as 5-FHP. Measuring the performance on iteration t compares how well the SD-CFR averaging procedure would do against the one of Deep CFR if the algorithm stopped training after t iterations. B^s reached its maximum capacity of 40 million for both players by iteration 120 in all runs. Before this point, SD-CFR defeats Deep CFR by a sizable margin, but even after that, SD-CFR clearly defeats Deep CFR. We analyze how far the average strategies of SD-CFR and Deep CFR are apart at different depths of the tree of 5-FHP. In particular, we measure the mean absolute difference between the action probabilities of the two average strategies at each visited information set. We ran 200,000 trajectory rollouts for each player, where player i plays according to SD-CFR's average strategy σ̄ and −i plays uniformly at random. Hence, we only evaluate on trajectories on which the agent should feel comfortable. The two agents again share the same value networks and thus approximate the same equilibrium. We trained for 180 iterations, a little more than it takes for B^s and B^v to be full for both players. Table 1 shows that Deep CFR's approximation is good on early levels of the tree but has a larger error in information sets reached only after multiple decision points. Regression CFR (R-CFR) applies regression trees to estimate regret values in CFR and CFR+. Unfortunately, despite promising expectations, recent work failed to apply R-CFR in combination with sampling. Advantage Regret Minimization (ARM) is similar to R-CFR but was only applied to single-player environments. Nevertheless, ARM did show that regret-based methods can be of interest in imperfect information games much bigger, less structured, and more chaotic than poker. DeepStack (Moravčík et al., 2017) was the first algorithm to defeat professional poker players in one-on-one gameplay of Heads-Up No-Limit Hold'em Poker (HUNL) requiring just a single GPU and CPU for real-time play. It accomplished this through combining real-time solving with counterfactual value approximation with deep networks. Unfortunately, DeepStack relies on tabular CFR methods without card abstraction to generate data for its counterfactual value networks, which could make applications to domains with many more private information states than HUNL has difficult. Neural Fictitious Self-Play (NFSP) was the first algorithm to soundly apply deep reinforcement learning from single trajectory samples to large extensive-form games. While not showing record-breaking results in terms of exploitability, NFSP was able to learn a competitive strategy in Limit Texas Hold'em Poker over just 14 GPU/days. Recent literature elaborates on the convergence properties of multi-agent deep reinforcement learning and introduces novel actor-critic algorithms that have similar convergence properties as NFSP and SD-CFR. So far, Deep CFR was only evaluated in games with three player actions. Since external sampling would likely be intractable in games with tens or more actions, one could employ outcome sampling, robust sampling, Targeted CFR, or average-strategy-sampling in such settings. To avoid action translation after training in an action-abstracted game, continuous approximations of large discrete action-spaces where actions are closely related (e.g. bet-size selection in No-Limit Poker games, auctions, settlements, etc.) could be of interest.
This might be achieved by having the value networks predict parameters to a continuous function whose integral can be evaluated efficiently. The iteration-strategy could be derived by normalizing the advantage clipped below 0. The probability of action a could be calculated as the integral of the strategy on the interval corresponding to a in the discrete space. Given a few modifications to its neural architecture and sampling procedure, SD-CFR could potentially be applied to much less structured domains than poker such as those that deep reinforcement learning methods like PPO are usually applied to. A first step on this line of research could be to evaluate whether SD-CFR is preferred over such actor-critic approaches in these settings. We introduced Single Deep CFR (SD-CFR), a new variant of CFR that uses function approximation and partial tree traversals to generalize over the game's state space. In contrast to previous work, SD-CFR extracts the average strategy directly from a buffer of value networks from past iterations. We show that SD-CFR is more attractive in theory and performs much better in practice than Deep CFR. B^v and B^s have a capacity of 1 million for each player. On each iteration, data is collected over 1,500 external sampling traversals and a new value network is trained to convergence (750 updates of batch size 2048), initialized randomly at t < 2 and with the weights of the value net from iteration t − 2 afterwards. Average-strategy networks are trained to convergence (5000 updates of batch size 2048) always from a random initialization. All networks used for this evaluation have 3 fully-connected layers of 64 units each, which adds up to more parameters than Leduc Hold'em has states. All other hyperparameters were chosen as in prior work. Leduc Hold'em Poker is a two-player game, where players alternate seats after each round. At the start of the game, both players add 1 chip, the ante, to the pot and are dealt a private card (unknown to the opponent) from a deck consisting of 6 cards: {A, A, B, B, C, C}. There are two rounds: pre-flop and flop. The game starts at the pre-flop and transitions to the flop after both players have acted and wagered the same number of chips. At each decision point, players can choose an action from a subset of {fold, call, raise}. When a player folds, the game ends and all chips in the pot are awarded to the opponent. Calling means matching the opponent's raise. The first player to act in a round has the option of checking, which is essentially a call of zero chips. Their opponent can then bet or also check. When a player raises, he adds more chips to the pot than his opponent wagered so far. In Leduc Hold'em, the number of raises per round is capped at 2. Each raise adds 2 additional chips in the pre-flop round and 4 in the flop round. On the transition from pre-flop to flop, one card from the remaining deck is revealed publicly. If no player folded and the game ends with a player calling, they show their hands and determine the winner by the rule that if a player's private card matches the flop card, they win. Otherwise the player with the higher card according to A > B > C wins. [Footnote: Since σ^t_i is the acting policy, this also shows that an opponent cannot tell whether the agent is using this sampling method or following an explicitly computed σ̄^T_i.] We conducted experiments to investigate the harm caused by the function approximation of Ŝ.
We found that in variants of Leduc Hold'em with more than 3 ranks and multiple bets, the performance gap between Deep CFR and SD-CFR was smaller. Below we plot the exploitability curves of the early iterations in a variant of Leduc that uses a deck of 12 ranks and allows a maximum of 6 instead of 2 bets per round. We believe the smaller difference in performance is due to the equilibrium in this game being less sensitive to small differences in action probabilities, while the game is still small enough to see every state often during training. In vanilla Leduc, slight deviations from optimal play give away a lot about one's private information as there are just three distinguishable cards. In contrast, this variant of Leduc, despite having more states, might be less susceptible to approximation error as it has 12 distinguishable cards but similarly simple rules. For the plot below, we ran Deep CFR and SD-CFR with shared value networks, where all buffers have a capacity of 4 million. On each iteration, data is collected over 8,800 external sampling traversals and a new value network is trained to convergence (1200 updates of batch size 2816), initialized randomly at t < 2 and with the weights of the value net from iteration t − 2 afterwards. Average-strategy networks are trained to convergence (10000 updates of batch size 5632) from a random initialization. The network architecture is as in prior work, differing only by the card-branch having 64 units per layer instead of 192.
Better Deep Reinforcement Learning algorithm to approximate Counterfactual Regret Minimization
942
scitldr
While generative models have shown great success in generating high-dimensional samples conditional on low-dimensional descriptors (learning e.g. stroke thickness in MNIST, hair color in CelebA, or speaker identity in Wavenet), their generation out-of-sample poses fundamental problems. The conditional variational autoencoder (CVAE) as a simple conditional generative model does not explicitly relate conditions during training and, hence, has no incentive of learning a compact joint distribution across conditions. We overcome this limitation by matching their distributions using maximum mean discrepancy (MMD) in the decoder layer that follows the bottleneck. This introduces a strong regularization both for reconstructing samples within the same condition and for transforming samples across conditions, resulting in much improved generalization. We refer to the architecture as transformer VAE (trVAE). Benchmarking trVAE on high-dimensional image and tabular data, we demonstrate higher robustness and higher accuracy than existing approaches. In particular, we show qualitatively improved predictions for cellular perturbation response to treatment and disease based on high-dimensional single-cell gene expression data, by tackling previously problematic minority classes and multiple conditions. For generic tasks, we improve Pearson correlations of high-dimensional estimated means and variances with their ground truths from 0.89 to 0.97 and 0.75 to 0.87, respectively. The task of generating high-dimensional samples x conditional on a latent random vector z and a categorical variable s has established solutions. The situation becomes more complicated if the support of z is divided into different domains d with different semantic meanings: say d ∈ {men, women} and one is interested in out-of-sample generation of samples x in a domain and condition (d, s) that is not part of the training data. If one predicts how a given black-haired man would look with blonde hair, which we refer to as transforming x_{men, black-hair} → x_{men, blonde-hair}, this becomes an out-of-sample problem if the training data does not have instances of blonde-haired men, but merely of blonde- and black-haired women and black-haired men. In an application with higher relevance, there is strong interest in how untreated (s = 0) humans (d = 0) respond to drug treatment (s = 1) based on training data from in vitro (d = 1) and mice (d = 2) experiments. Hence, the target domain of interest (d = 0) does not offer training data for s = 1, but only for s = 0. In the present paper, we propose to address the challenge of transforming out-of-sample by regularizing the joint distribution across the categorical variable s using maximum mean discrepancy (MMD) in the framework of a conditional variational autoencoder (CVAE). This produces a more compact representation of a distribution that displays high variance in the vanilla CVAE, which incentivizes learning of features across s and results in more accurate out-of-sample prediction. MMD has proven successful in a variety of tasks. In particular, matching distributions with MMD in variational autoencoders has been put forward for unsupervised domain adaptation or for learning statistically independent latent dimensions. In supervised domain adaptation approaches, MMD-based regularization has been shown to be a viable strategy of learning label-predictive features with domain-specific information removed.
In further related work, the out-of-sample transformation problem was addressed via hard-coded latent space vector arithmetics and histogram matching. The approach of the present paper, however, introduces a data-driven end-to-end approach, which does not involve hard-coded elements and generalizes to more than one condition. In representation learning, one aims to map a vector x to a representation z for which a given downstream task can be performed more efficiently. Hierarchical Bayesian models yield probabilistic representations in the form of sufficient statistics for the model's posterior distribution. Let {X, S} denote the set of observed random variables and Z the set of latent variables (Z_i denotes component i). Then Bayesian inference aims to maximize the likelihood p_θ(X | S) = ∫ p_θ(X | Z, S) p(Z) dZ. Because the integral is in general intractable, variational inference finds a distribution q_φ(Z | X, S) that maximizes a lower bound on the data likelihood, the evidence lower bound (ELBO): E_{q_φ(Z|X,S)}[log p_θ(X | Z, S)] − KL(q_φ(Z | X, S) ‖ p(Z)). In the case of a variational autoencoder (VAE), the variational distribution is parametrized by a neural network: both the generative model and the variational approximation have conditional distributions parametrized with neural networks. The difference between the data likelihood and the ELBO is the variational gap, KL(q_φ(Z | X, S) ‖ p_θ(Z | X, S)). The original AEVB framework is described in the seminal paper for the case Z = {z}, X = {x}, S = ∅. The representation z is optimized to "explain" the data x. The variational distribution can be used to meet different needs: q_φ(y | x) is a classifier for a class label y and q_φ(z | x) summarizes the data. When using a VAE, the empirical data distribution p_data(X, S) is transformed to the representation q̂_φ(Z) = E_{p_data(X,S)} q_φ(Z | X, S). The case in which S ≠ ∅ is referred to as the conditional variational autoencoder (CVAE), and is a straightforward extension of the original framework. Let (Ω, F, P) be a probability space. Let X (resp. X′) be a separable metric space. Let x: Ω → X (resp. x′: Ω → X′) be a random variable. Let k: X × X → R (resp. k′: X′ × X′ → R) be a continuous, bounded, positive semi-definite kernel. Let H be the corresponding reproducing kernel Hilbert space (RKHS) and φ: Ω → H the corresponding feature mapping. Consider the kernel-based estimate of distance between two distributions p and q over the random variables X and X′. Such a distance, defined via the canonical distance between their H-embeddings, is called the maximum mean discrepancy and denoted MMD(p, q), with the explicit empirical expression MMD(X, X′) = (1/n_0²) Σ_{i,j} k(x_i, x_j) − (2/(n_0 n_1)) Σ_{i,j} k(x_i, x′_j) + (1/n_1²) Σ_{i,j} k(x′_i, x′_j), where the sums run over the number of samples n_0 and n_1 for x and x′, respectively. Asymptotically, for a universal kernel such as the Gaussian kernel k(x, x′) = e^{−‖x−x′‖²/γ}, MMD(X, X′) is 0 if and only if p ≡ q. For the implementation, we use multi-scale RBF kernels defined as a sum of RBF kernels k(x, x′) = Σ_i e^{−‖x−x′‖²/γ_i}, where γ_i is a hyper-parameter.

Figure 1: The transformer VAE (trVAE) is an MMD-regularized CVAE. It receives randomized batches of data (x) and condition (s) as input during training, stratified for approximately equal proportions of s. In contrast to a standard CVAE, we regularize the effect of s on the representation obtained after the first layer g_1(ẑ, s) of the decoder g. During prediction time, we transform batches of the source condition x_{s=0} to the target condition x_{s=1} by encoding ẑ_0 = f(x_0, s = 0) and decoding g(ẑ_0, s = 1).
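As a concrete reference, below is a minimal NumPy sketch of the (biased) empirical MMD estimator with a multi-scale RBF kernel; the specific bandwidth values in `gammas` and the function names are illustrative assumptions of this sketch, not the hyper-parameters used in the experiments.

```python
import numpy as np

def rbf_kernel(a, b, gamma):
    # Pairwise k(x, x') = exp(-||x - x'||^2 / gamma) for rows of a and b.
    sq_dists = (np.sum(a**2, axis=1)[:, None]
                + np.sum(b**2, axis=1)[None, :]
                - 2.0 * a @ b.T)
    return np.exp(-sq_dists / gamma)

def mmd2(x, x_prime, gammas=(1.0, 10.0, 100.0)):
    # Biased estimate of MMD^2(p, q) with a multi-scale RBF kernel
    # (sum of RBF kernels over the bandwidths in `gammas`):
    # mean k(x, x) - 2 mean k(x, x') + mean k(x', x').
    k_xx = sum(rbf_kernel(x, x, g) for g in gammas)
    k_xy = sum(rbf_kernel(x, x_prime, g) for g in gammas)
    k_yy = sum(rbf_kernel(x_prime, x_prime, g) for g in gammas)
    return k_xx.mean() - 2.0 * k_xy.mean() + k_yy.mean()

# Identical distributions give values near zero; shifted ones do not.
rng = np.random.default_rng(0)
print(mmd2(rng.normal(size=(100, 8)), rng.normal(size=(100, 8))))
print(mmd2(rng.normal(size=(100, 8)), rng.normal(2.0, 1.0, size=(100, 8))))
```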
Addressing the domain adaptation problem, the "Variational Fair Autoencoder" (VFAE) uses MMD to match latent distributions q_φ(z | s = 0) and q_φ(z | s = 1), where s denotes a domain, by adding an MMD penalty to the standard VAE cost function L_VAE; here X and X′ are two high-dimensional observations with their respective conditions S and S′. In contrast to GANs, whose training procedure is notoriously hard due to the min-max optimization problem, training models using MMD or Wasserstein distance metrics is comparatively simple, as only a direct minimization of a straightforward loss is involved during training. It has been shown that MMD-based GANs have some advantages over Wasserstein GANs, resulting in a simpler and faster training algorithm with matching performance (Bińkowski et al., 2018). This motivated us to choose MMD as a metric for regularizing distribution matching. Let us adopt the following notation for the transformation within a standard CVAE. High-dimensional observations x and a scalar or low-dimensional condition s are transformed using f (encoder) and g (decoder), which are parametrized by weight-sharing neural networks, and give rise to predictors ẑ, ŷ and x̂: ẑ = f(x, s), ŷ = g_1(ẑ, s), x̂ = g_2(ŷ), where we distinguish the first layer (g_1) and the remaining layers (g_2) of the decoder g = g_2 ∘ g_1 (Fig. 1). While z formally depends on s, it is commonly empirically observed that Z ⊥⊥ S, that is, the representation z is disentangled from the condition information s. By contrast, the original representation typically strongly covaries with S: X ̸⊥⊥ S. The observation can be explained by admitting that an efficient z-representation, suitable for minimizing reconstruction and regularization losses, should be as free as possible from information about s. Information about s is directly and explicitly available to the decoder through g_1(ẑ, s), and hence, there is an incentive to optimize the parameters of f to only explain the variation in x that is not explained by s. Experiments below demonstrate that indeed, MMD regularization on the bottleneck layer z does not improve performance. However, even if z is completely free of variation from s, the y representation has a strong s component, Y ̸⊥⊥ S, which leads to a separation of y_{s=1} and y_{s=0} into different regions of their support Y. In the standard CVAE, without any regularization of this y representation, a highly varying, non-compact distribution emerges across different values of s (Fig. 2). To compactify the distribution so that it displays only subtle, controlled differences, we impose MMD (equation 4) in the first layer of the decoder (Fig. 1). We assume that modeling y in the same region of the support of Y across s forces learning of common features across s where possible. The more of these common features are learned, the more accurately the transformation task will be performed, and the higher are the chances of successful out-of-sample generation. Using one of the benchmark datasets introduced below, we qualitatively illustrate the effect (Fig. 2). During training time, all samples are passed to the model with their corresponding condition labels (x_s, s). At prediction time, we pass (x_{s=0}, s = 0) to the encoder f to obtain the latent representation ẑ_{s=0}. In the decoder g, we pass (ẑ_{s=0}, s = 1) and, through that, let the model transform data to x̂_{s=1}.
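To make the transformation pathway concrete, here is a minimal NumPy sketch of the encode-under-source, decode-under-target procedure; the deterministic encoder (no sampled variance), the single affine+ReLU layers, and all layer sizes are simplifying assumptions of this sketch rather than details of the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def affine_relu(d_in, d_out):
    # A single affine + ReLU layer standing in for the paper's MLPs.
    W = rng.normal(0.0, 0.1, (d_in, d_out))
    return lambda h: np.maximum(h @ W, 0.0)

d_x, d_z, d_y = 784, 16, 128
f  = affine_relu(d_x + 1, d_z)   # encoder: z_hat = f(x, s)
g1 = affine_relu(d_z + 1, d_y)   # first decoder layer: y_hat = g1(z_hat, s)
g2 = affine_relu(d_y, d_x)       # remaining decoder layers: x_hat = g2(y_hat)

def with_condition(h, s):
    # Append the scalar condition label as an extra input feature.
    return np.concatenate([h, np.full((len(h), 1), s)], axis=1)

def transform(x, s_source=0.0, s_target=1.0):
    # Encode under the source condition, decode under the target condition.
    z_hat = f(with_condition(x, s_source))
    y_hat = g1(with_condition(z_hat, s_target))  # MMD is imposed on y_hat
    return g2(y_hat)

x0 = rng.random((32, d_x))   # a batch of source-condition samples
x1_hat = transform(x0)       # predicted target-condition samples
```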
The cost function of trVAE derives directly from the standard CVAE cost function, as introduced in the preceding section. Duplicating the cost function for X′ and adding an MMD term, the loss of trVAE becomes L_trVAE(X, S, X′, S′) = L_CVAE(X, S) + L_CVAE(X′, S′) + β · MMD(ŷ_S, ŷ_{S′}), with the MMD term acting on the first-layer decoder representations ŷ = g_1(ẑ, s) and β weighting the regularization.

Figure 3: Out-of-sample style transfer for the Morpho-MNIST dataset containing normal, thin and thick digits. trVAE successfully transforms normal digits to thin (a) and thick (b) for digits not seen during training (out-of-sample).

We demonstrate the advantages of an MMD-regularized first layer of the decoder by benchmarking against a variety of existing methods and alternatives:
• Vanilla CVAE
• CVAE with MMD on bottleneck (MMD-CVAE), similar to VFAE
• MMD-regularized autoencoder
• CycleGAN
• scGen, a VAE combined with vector arithmetics
• scVI, a CVAE with a negative binomial output distribution
First, we demonstrate trVAE's basic out-of-sample style transfer capacity on two established image datasets, on a qualitative level. We then address quantitative comparisons of challenging benchmarks with clear ground truth, predicting the effects of biological perturbation based on high-dimensional structured data. We used convolutional layers for the imaging examples in section 4.1 and fully connected layers for the single-cell gene expression datasets in sections 4.2 and 4.3. The optimal hyper-parameters for each application were chosen by a grid search for each model. The detailed hyper-parameters for the different models are reported in tables 1-9 in appendix A. Here, we use Morpho-MNIST, which contains 60,000 images each of "normal" and "transformed" digits, which are drawn with a thinner and thicker stroke. For training, we used all normal-stroke data. Hence, the training data covers all domains (d ∈ {0, 1, 2, ..., 9}) in the normal stroke condition (s = 0). In the transformed conditions (thin and thick strokes, s ∈ {1, 2}), we only kept domains d ∈ {1, 3, 6, 7}. We train a convolutional trVAE in which we first encode the stroke width via two fully-connected layers with 128 and 784 features, respectively. Next, we reshape the 784-dimensional result into 28×28×1 images and add them as another channel in the image. The trained trVAE faithfully transforms digits of normal stroke to digits of thin and thicker stroke in the out-of-sample domains (Fig. 3).

Figure 4: CelebA dataset with images in two conditions: celebrities without a smile and with a smile on their face. trVAE successfully adds a smile to faces of women without a smile despite these samples completely lacking from the training data (out-of-sample). The training data only comprises non-smiling women and smiling and non-smiling men.

Next, we apply trVAE to CelebA, which contains 202,599 images of celebrity faces with 40 binary attributes for each image. We focus on the task of learning a transformation that turns a non-smiling face into a smiling face. We kept the smiling (s) and gender (d) attributes and trained the model with images from both smiling and non-smiling men but only with non-smiling women. In this case, we trained a deep convolutional trVAE with a U-Net-like architecture. We encoded the binary condition labels as in the Morpho-MNIST example and fed them as an additional channel in the input. Predicting out-of-sample, trVAE successfully transforms non-smiling faces of women into smiling faces while preserving most aspects of the original image (Fig. 4). A sketch of this conditioning mechanism follows below.
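The label-conditioning trick used in both image experiments (embedding the condition through two fully-connected layers of 128 and 784 units and appending the reshaped result as an extra image channel) can be sketched as follows; the random weights and ReLU nonlinearity are illustrative assumptions, only the 128 → 784 → 28×28 pipeline comes from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

W1 = rng.normal(0.0, 0.1, (1, 128))    # condition -> 128 features
W2 = rng.normal(0.0, 0.1, (128, 784))  # 128 -> 784 features

def condition_to_channel(s):
    # Embed the scalar condition and reshape it to a 28x28x1 map.
    h = np.maximum(s.reshape(-1, 1) @ W1, 0.0)
    return (h @ W2).reshape(-1, 28, 28, 1)

x = rng.random((32, 28, 28, 1))  # batch of 28x28 grayscale digits
s = np.ones(32)                  # stroke-width condition labels
x_cond = np.concatenate([x, condition_to_channel(s)], axis=-1)
print(x_cond.shape)              # (32, 28, 28, 2): image plus condition channel
```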
In addition to showing the model's capacity to handle more complex data, this example demonstrates the flexibility of the model in adapting to well-known architectures such as U-Net. Accurately modeling cell response to perturbations is a key question in computational biology. Recently, neural network models have been proposed for out-of-sample predictions of high-dimensional tabular data that quantifies gene expression of single cells. However, these models are not trained end-to-end on the task, relying instead on hard-coded transformations, and cannot handle more than two conditions. We evaluate trVAE on a single-cell gene expression dataset that characterizes the gut after Salmonella or Heligmosomoides polygyrus (H. poly) infections, respectively. For this, we closely follow the benchmark as introduced in previous work. The dataset contains eight different cell types in four conditions: control or healthy cells (n=3,240), H.poly infection after three days (H.poly Day3, n=2,121), H.poly infection after 10 days (H.poly Day10, n=2,711) and Salmonella infection (n=1,770) (Fig. 5a). The normalized gene expression data has 1,000 dimensions corresponding to 1,000 genes. Since three of the benchmark models are only able to handle two conditions, we only included the control and H.poly Day10 conditions for model comparisons. In this setting, we hold out infected Tuft cells for training and validation, as these constitute the hardest case for out-of-sample generalization (least shared features, few training data). Figure 5b-c shows trVAE accurately predicts the mean and variance for high-dimensional gene expression in Tuft cells. We compared the distribution of Defa24, the gene with the highest change after H.poly infection in Tuft cells, which shows trVAE provides better estimates for mean and variance compared to other models. Moreover, trVAE outperforms other models also when quantifying the correlation of the predicted 1,000-dimensional x with its ground truth (Fig. 5e). In particular, we note that the MMD regularization on the bottleneck layer of the CVAE does not improve performance, as argued above. In order to show our model is able to handle multiple conditions, we performed another experiment with all three conditions included. We trained trVAE holding out each of the eight cell types in all perturbed conditions. Figure 5f shows trVAE can accurately predict all cell types in each perturbed condition, in contrast to existing models. Similar to modeling infection response as above, we benchmark on another single-cell gene expression dataset consisting of 7,217 IFN-β-stimulated and 6,359 control peripheral blood mononuclear cells (PBMCs) from eight different human Lupus patients. The stimulation with IFN-β induces dramatic changes in the transcriptional profiles of immune cells, which causes big shifts between control and stimulated cells (Fig. 6a). We studied the out-of-sample prediction of natural killer (NK) cells held out during the training of the model. trVAE accurately predicts mean (Fig. 6b) and variance (Fig. 6c) for all genes in the held-out NK cells. In particular, genes strongly responding to IFN-β (highlighted in red in Fig. 6b-c) are well captured. An effect of applying IFN-β is an increase in ISG15 for NK cells, which the model never sees during training. trVAE predicts this change by increasing the expression of ISG15 as observed in real NK cells (Fig. 6d). A CycleGAN, an MMD-regularized auto-encoder (SAUCIE) and other models yield less accurate results than our model.
Comparing the correlation of predicted mean and variance of gene expression for all dimensions of the data, we find trVAE performs best (Fig. 6e). By arguing that the vanilla CVAE yields representations in the first layer following the bottleneck that vary strongly across categorical conditions, we introduced an MMD regularization that forces these representations to be similar across conditions. The resulting model (trVAE) outperforms existing modeling approaches on benchmark and real-world datasets. Within the bottleneck layer, CVAEs already display a well-controlled behavior, and regularization does not improve performance. Further regularization at later layers might be beneficial but is numerically costly and unstable as representations become high-dimensional. However, we have not yet systematically investigated this and leave it for future studies. Further future work will concern the application of trVAE to larger and more complex data, focusing on interaction effects among conditions. For this, an important application is the study of drug interaction effects, as previously noted in the literature. Future conceptual investigations concern establishing connections to causal-inference-inspired models such as CEVAE: faithful modeling of an interventional distribution might possibly be re-framed as successful perturbation effect prediction across domains. A HYPER-PARAMETERS
Generates data never seen during training from a desired condition
943
scitldr
We analyze speed of convergence to global optimum for gradient descent training a deep linear neural network by minimizing the ℓ2 loss over whitened data. Convergence at a linear rate is guaranteed when the following hold: (i) dimensions of hidden layers are at least the minimum of the input and output dimensions; (ii) weight matrices at initialization are approximately balanced; and (iii) the initial loss is smaller than the loss of any rank-deficient solution. The assumptions on initialization (conditions (ii) and (iii)) are necessary, in the sense that violating any one of them may lead to convergence failure. Moreover, in the important case of output dimension 1, i.e. scalar regression, they are met, and thus convergence to global optimum holds, with constant probability under a random initialization scheme. Our results significantly extend previous analyses, e.g., of deep linear residual networks. Deep learning builds upon the mysterious ability of gradient-based optimization methods to solve related non-convex problems. Immense efforts are underway to mathematically analyze this phenomenon. The prominent landscape approach focuses on special properties of critical points (i.e. points where the gradient of the objective function vanishes) that will imply convergence to global optimum. Several papers have shown that (given certain smoothness properties) it suffices for critical points to meet the following two conditions: (i) no poor local minima: every local minimum is close in its objective value to a global minimum; and (ii) strict saddle property: every critical point that is not a local minimum has at least one negative eigenvalue to its Hessian. While condition (i) does not always hold, it has been established for various simple settings. Condition (ii) on the other hand seems less plausible, and is in fact provably false for models with three or more layers (cf. Kawaguchi, 2016), i.e. for deep networks. It has only been established for problems involving shallow (two layer) models, e.g. matrix factorization (BID12). The landscape approach as currently construed thus suffers from inherent limitations in proving convergence to global minimum for deep networks. A potential path to circumvent this obstacle lies in realizing that landscape properties matter only in the vicinity of trajectories that can be taken by the optimizer, which may be a negligible portion of the overall parameter space. Several papers (e.g. BID1) have taken this trajectory-based approach, primarily in the context of linear neural networks, i.e. fully-connected neural networks with linear activation. Linear networks are trivial from a representational perspective, but not so in terms of optimization: they lead to non-convex training problems with multiple minima and saddle points. Through a mix of theory and experiments, BID1 argued that such non-convexities may in fact be beneficial for gradient descent, in the sense that sometimes, adding (redundant) linear layers to a classic linear prediction model can accelerate the optimization. This phenomenon challenges the holistic landscape view, by which convex problems are always preferable to non-convex ones. Even in the linear network setting, a rigorous proof of efficient convergence to global minimum has proved elusive.
One recent advance is the analysis of BID3 for linear residual networks, a particular subclass of linear neural networks in which the input, output and all hidden dimensions are equal, and all layers are initialized to be the identity matrix. Through a trajectory-based analysis of gradient descent minimizing ℓ2 loss over a whitened dataset (see Section 2), BID3 show that convergence to global minimum at a linear rate (loss is less than ε > 0 after O(log(1/ε)) iterations) takes place if one of the following holds: (i) the objective value at initialization is sufficiently close to a global minimum; or (ii) a global minimum is attained when the product of all layers is positive definite. The current paper carries out a trajectory-based analysis of gradient descent for general deep linear neural networks, covering the residual setting of BID3, as well as many more settings that better match practical deep learning. Our analysis draws upon the trajectory characterization of BID1 for gradient flow (infinitesimally small learning rate), together with significant new ideas necessitated due to discrete updates. Ultimately, we show that when minimizing ℓ2 loss of a deep linear network over a whitened dataset, gradient descent converges to the global minimum, at a linear rate, provided that the following conditions hold: (i) the dimensions of hidden layers are greater than or equal to the minimum between those of the input and output; (ii) layers are initialized to be approximately balanced (see Definition 1), which is met under commonplace near-zero, as well as residual (identity), initializations; and (iii) the initial loss is smaller than any loss obtainable with rank deficiencies, a condition that will hold with probability close to 0.5 if the output dimension is 1 (scalar regression) and standard (random) near-zero initialization is employed. Our result applies to networks with arbitrary depth and input/output dimensions, as well as any configuration of hidden layer widths that does not force rank deficiency (i.e. that meets condition (i)). The assumptions on initialization (conditions (ii) and (iii)) are necessary, in the sense that violating any one of them may lead to convergence failure. Moreover, in the case of scalar regression, they are met with constant probability under a random initialization scheme. We are not aware of any similarly general analysis for efficient convergence of gradient descent to global minimum in deep learning. The remainder of the paper is organized as follows. In Section 2 we present the problem of gradient descent training a deep linear neural network by minimizing the ℓ2 loss over a whitened dataset. Section 3 formally states our assumptions, and presents our convergence analysis. Key ideas brought forth by our analysis are demonstrated empirically in Section 4. Section 5 gives a review of relevant literature, including a detailed comparison of our results against those of BID3. Finally, Section 6 concludes. We denote by ‖v‖ the Euclidean norm of a vector v, and by ‖A‖_F the Frobenius norm of a matrix A. We are given a training set {(x^(i), y^(i))}_{i=1}^m ⊂ R^{d_x} × R^{d_y}, and would like to learn a hypothesis (predictor) from a parametric family H := {h_θ : R^{d_x} → R^{d_y} | θ ∈ Θ} by minimizing the ℓ2 loss L(θ) = (1/2m) Σ_{i=1}^m ‖h_θ(x^(i)) − y^(i)‖². When the parametric family in question is the class of linear predictors, i.e. H = {x ↦ W x | W ∈ R^{d_y×d_x}}, the training loss may be written as L(W) = (1/2m) ‖W X − Y‖_F², where X ∈ R^{d_x×m} and Y ∈ R^{d_y×m} are matrices whose columns hold instances and labels respectively. Suppose now that the dataset is whitened, i.e.
has been transformed such that the empirical (uncentered) covariance matrix for instances, Λ_xx := (1/m) X Xᵀ ∈ R^{d_x×d_x}, is equal to identity. Standard calculations (see Appendix A) show that in this case L(W) = (1/2) ‖W − Λ_yx‖_F² + c, where Λ_yx := (1/m) Y Xᵀ ∈ R^{d_y×d_x} is the empirical (uncentered) cross-covariance matrix between instances and labels, and c is a constant (that does not depend on W). Denoting Φ := Λ_yx for brevity, we have that for linear models, minimizing ℓ2 loss over whitened data is equivalent to minimizing the squared Frobenius distance from a target matrix Φ: L¹(W) = (1/2) ‖W − Φ‖_F². Our interest in this work lies on linear neural networks, i.e. fully-connected neural networks with linear activation. A depth-N (N ∈ N) linear neural network with hidden widths d_1, ..., d_{N−1} ∈ N corresponds to the parametric family of hypotheses H := {x ↦ W_N W_{N−1} ⋯ W_1 x | W_j ∈ R^{d_j×d_{j−1}}, j = 1, ..., N}, where d_0 := d_x, d_N := d_y. Similarly to the case of a (directly parameterized) linear predictor, with a linear neural network, minimizing ℓ2 loss over whitened data can be cast as squared Frobenius approximation of a target matrix Φ: L^N(W_1, ..., W_N) = (1/2) ‖W_N W_{N−1} ⋯ W_1 − Φ‖_F². Note that the notation L^N(·) is consistent with the above, as a network with depth N = 1 precisely reduces to a (directly parameterized) linear model. We focus on studying the process of training a deep linear neural network by gradient descent, i.e. of tackling this optimization problem by iteratively applying the updates W_j(t+1) = W_j(t) − η (∂L^N/∂W_j)(W_1(t), ..., W_N(t)), j = 1, ..., N, where η > 0 is a configurable learning rate. In the case of depth N = 1, the training problem is smooth and strongly convex, thus it is known (cf. BID5) that with proper choice of η, gradient descent converges to global minimum at a linear rate. In contrast, for any depth greater than 1, the problem is fundamentally non-convex, and the convergence properties of gradient descent are highly non-trivial. Apart from the case N = 2 (shallow network), one cannot hope to prove convergence via landscape arguments, as the strict saddle property is provably violated (see Section 1). We will see in Section 3 that a direct analysis of the trajectories taken by gradient descent can succeed in this arena, providing a guarantee for linear rate convergence to global minimum. We close this section by introducing additional notation that will be used in our analysis. For an arbitrary matrix A, we denote by σ_max(A) and σ_min(A) its largest and smallest (respectively) singular values. For d ∈ N, we use I_d to signify the identity matrix in R^{d×d}. Given weights W_1, ..., W_N of a linear neural network, we let W_{1:N} be the direct parameterization of the end-to-end linear mapping realized by the network, i.e. W_{1:N} := W_N W_{N−1} ⋯ W_1. Note that L^N(W_1, ..., W_N) = L¹(W_{1:N}), meaning the loss associated with a depth-N network is equal to the loss of the corresponding end-to-end linear model. In the context of gradient descent, we will oftentimes use ℓ(t) as shorthand for the loss at iteration t: ℓ(t) := L^N(W_1(t), ..., W_N(t)) = L¹(W_{1:N}(t)). In this section we establish convergence of gradient descent for deep linear neural networks by directly analyzing the trajectories taken by the algorithm. We begin in Subsection 3.1 with a presentation of two concepts central to our analysis: approximate balancedness and deficiency margin. These facilitate our main convergence theorem, delivered in Subsection 3.2. We conclude in Subsection 3.3 by deriving a convergence guarantee that holds with constant probability over a random initialization.
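Before turning to the analysis, the objective and update just defined can be made concrete in a few lines of NumPy; the closed-form gradient ∂L^N/∂W_j = W_{j+1:N}ᵀ (W_{1:N} − Φ) W_{1:j−1}ᵀ used below follows from the chain rule, and the function names are my own.

```python
import numpy as np

def end_to_end(Ws):
    # W_{1:N} = W_N W_{N-1} ... W_1 (Ws is ordered [W_1, ..., W_N]).
    P = Ws[0]
    for W in Ws[1:]:
        P = W @ P
    return P

def loss(Ws, Phi):
    # L^N(W_1, ..., W_N) = 1/2 * ||W_N ... W_1 - Phi||_F^2
    return 0.5 * np.linalg.norm(end_to_end(Ws) - Phi, "fro") ** 2

def gd_step(Ws, Phi, eta):
    # One step of W_j <- W_j - eta * W_{j+1:N}^T (W_{1:N} - Phi) W_{1:j-1}^T.
    N = len(Ws)
    E = end_to_end(Ws) - Phi
    new_Ws = []
    for j in range(N):
        left = end_to_end(Ws[:j]) if j > 0 else np.eye(Ws[0].shape[1])
        right = end_to_end(Ws[j + 1:]) if j < N - 1 else np.eye(Ws[-1].shape[0])
        new_Ws.append(Ws[j] - eta * right.T @ E @ left.T)
    return new_Ws
```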
In our context, the notion of approximate balancedness is formally defined as follows (for a rectangular matrix A ∈ R^{d×d′}, σ_min(A) stands for the min{d, d′}-th largest singular value; recall that singular values are always non-negative). Definition 1. For δ ≥ 0, we say that the matrices W_j ∈ R^{d_j×d_{j−1}}, j = 1, ..., N, are δ-balanced if ‖W_{j+1}ᵀ W_{j+1} − W_j W_jᵀ‖_F ≤ δ, ∀j ∈ {1, ..., N−1}. Note that in the case of 0-balancedness, i.e. W_{j+1}ᵀ W_{j+1} = W_j W_jᵀ, ∀j ∈ {1, ..., N−1}, all matrices W_j share the same set of non-zero singular values. Moreover, as shown in the proof of Theorem 1 in BID1, this set is obtained by taking the N-th root of each non-zero singular value in the end-to-end matrix W_{1:N}. We will establish approximate versions of these facts for δ-balancedness with δ > 0, and admit their usage by showing that if the weights of a linear neural network are initialized to be approximately balanced, they will remain that way throughout the iterations of gradient descent. The condition of approximate balancedness at initialization is trivially met in the special case of linear residual networks. Moreover, as Claim 2 in Appendix B shows, for a given δ > 0, the customary initialization via random Gaussian distribution with mean zero leads to approximate balancedness with high probability if the standard deviation is sufficiently small. The second concept we introduce, deficiency margin, refers to how far a ball around the target is from containing rank-deficient (i.e. low rank) matrices. Definition 2. Given a target matrix Φ ∈ R^{d_N×d_0} and a constant c > 0, we say that a matrix W ∈ R^{d_N×d_0} has deficiency margin c with respect to Φ if ‖W − Φ‖_F ≤ σ_min(Φ) − c. The term "deficiency margin" alludes to the fact that if this inequality holds, every matrix W′ whose distance from Φ is no greater than that of W has singular values c-bounded away from zero: Claim 1. Suppose W has deficiency margin c with respect to Φ. Then, any matrix W′ (of same size as Φ and W) for which ‖W′ − Φ‖_F ≤ ‖W − Φ‖_F satisfies σ_min(W′) ≥ c. Proof. Our proof relies on the inequality σ_min(A + B) ≥ σ_min(A) − σ_max(B); see Appendix D.1. We will show that if the weights W_1, ..., W_N are initialized such that (they are approximately balanced and) the end-to-end matrix W_{1:N} has deficiency margin c > 0 with respect to the target Φ, convergence of gradient descent to global minimum is guaranteed. Moreover, the convergence will outpace a particular rate that gets faster when c grows larger. This suggests that from a theoretical perspective, it is advantageous to initialize a linear neural network such that the end-to-end matrix has a large deficiency margin with respect to the target. Claim 3 in Appendix B provides information on how likely deficiency margins are in the case of a single output model (scalar regression) subject to customary zero-centered Gaussian initialization. It shows in particular that if the standard deviation of the initialization is sufficiently small, the probability of a deficiency margin being met is close to 0.5; on the other hand, for this deficiency margin to have considerable magnitude, a non-negligible standard deviation is required.
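Both Definition 1 and Definition 2 translate into one-line numerical checks; a minimal NumPy sketch (the function names are my own, not the paper's):

```python
import numpy as np

def balancedness(Ws):
    # Smallest delta for which W_1, ..., W_N are delta-balanced
    # (Definition 1): max_j ||W_{j+1}^T W_{j+1} - W_j W_j^T||_F.
    return max(np.linalg.norm(Wn.T @ Wn - W @ W.T, "fro")
               for W, Wn in zip(Ws[:-1], Ws[1:]))

def deficiency_margin(W, Phi):
    # Largest c with ||W - Phi||_F <= sigma_min(Phi) - c (Definition 2);
    # W has a (positive) deficiency margin w.r.t. Phi iff this is > 0.
    sigma_min = np.linalg.svd(Phi, compute_uv=False)[-1]
    return sigma_min - np.linalg.norm(W - Phi, "fro")
```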
Taking into account the need for both approximate balancedness and deficiency margin at initialization, we observe a delicate trade-off under the common setting of Gaussian perturbations around zero: if the standard deviation is small, it is likely that weights be highly balanced and a deficiency margin be met; however overly small standard deviation will render high magnitude for the deficiency margin improbable, and therefore fast convergence is less likely to happen; on the opposite end, large standard deviation jeopardizes both balancedness and deficiency margin, putting the entire convergence at risk. This trade-off is reminiscent of empirical phenomena in deep learning, by which small initialization can bring forth efficient convergence, while if exceedingly small, rate of convergence may plummet ("vanishing gradient problem"), and if made large, divergence becomes inevitable ("exploding gradient problem"). The common resolution of residual connections is analogous in our context to linear residual networks, which ensure perfect balancedness, and allow large deficiency margin if the target is not too far from identity.

Footnote 3: Note that deficiency margin c > 0 with respect to Φ implies σ_min(Φ) > 0, i.e. Φ has full rank. Our analysis can be extended to account for rank-deficient Φ by replacing σ_min(Φ) in Definition 2 with the smallest positive singular value of Φ, and by requiring that the end-to-end matrix W_{1:N} be initialized such that its left and right null spaces coincide with those of Φ. Relaxation of this requirement is a direction for future work.
Footnote 4: In fact, a deficiency margin implies that all critical points in the respective sublevel set (set of points with smaller loss value) are global minima. This however is far from sufficient for proving convergence, as sublevel sets are unbounded, and the loss landscape over them is non-convex and non-smooth. Indeed, we show in Appendix C that deficiency margin alone is not enough to ensure convergence: without approximate balancedness, the lack of smoothness can cause divergence.

Using approximate balancedness (Definition 1) and deficiency margin (Definition 2), we present our main theorem, a guarantee for linear convergence to global minimum: Theorem 1. Assume that gradient descent is initialized such that the end-to-end matrix W_{1:N} has deficiency margin c > 0 with respect to the target Φ, and the weights W_1, ..., W_N are δ-balanced with δ sufficiently small with respect to c and ‖Φ‖_F. Suppose also that the learning rate η is sufficiently small with respect to the same quantities. Then, for any ε > 0 and T on the order of (1/(η c^{2(N−1)/N})) · log(ℓ(0)/ε) or larger, the loss at iteration T of gradient descent, ℓ(T), is no greater than ε. The assumptions made in Theorem 1, approximate balancedness and deficiency margin at initialization, are both necessary, in the sense that violating any one of them may lead to convergence failure. We demonstrate this in Appendix C. In the special case of linear residual networks (uniform dimensions and identity initialization), a sufficient condition for the assumptions to be met is that the target matrix have (Frobenius) distance less than 0.5 from identity. This strengthens one of the central results in BID3 (see Section 5). For a setting of random near-zero initialization, we present in Subsection 3.3 a scheme that, when the output dimension is 1 (scalar regression), ensures assumptions are satisfied (and therefore gradient descent efficiently converges to global minimum) with constant probability. It is an open problem to fully analyze gradient descent under the common initialization scheme of zero-centered Gaussian perturbations applied to each layer independently. A toy numerical illustration of the theorem's conditions is given below.
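The following NumPy check, under arbitrary illustrative choices of dimension, noise scale and learning rate, demonstrates the theorem's message: identity initialization is perfectly balanced, and a target near identity yields a positive deficiency margin, so the loss should decay geometrically.

```python
import numpy as np

rng = np.random.default_rng(0)

d, eta = 4, 0.05
Phi = np.eye(d) + 0.1 * rng.normal(size=(d, d))   # target close to identity
W1, W2, W3 = np.eye(d), np.eye(d), np.eye(d)      # balanced (residual) init

for t in range(3000):
    E = W3 @ W2 @ W1 - Phi        # W_{1:3} - Phi
    G1 = W2.T @ W3.T @ E          # dL/dW_1 = W_{2:3}^T E
    G2 = W3.T @ E @ W1.T          # dL/dW_2 = W_3^T E W_1^T
    G3 = E @ W1.T @ W2.T          # dL/dW_3 = E W_{1:2}^T
    W1, W2, W3 = W1 - eta * G1, W2 - eta * G2, W3 - eta * G3

print(0.5 * np.linalg.norm(W3 @ W2 @ W1 - Phi, "fro") ** 2)  # ~ 0
```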
We treat the scenario of layer-wise independent Gaussian initialization in Appendix B, providing quantitative results concerning the likelihood of each assumption (approximate balancedness or deficiency margin) being met individually. However the question of how likely it is that both assumptions be met simultaneously, and how that depends on the standard deviation of the Gaussian, is left for future work. An additional point to make is that Theorem 1 poses a structural limitation on the linear neural network. Namely, it requires the dimension of each hidden layer (d_i, i = 1, ..., N−1) to be greater than or equal to the minimum between those of the input (d_0) and output (d_N). Indeed, in order for the initial end-to-end matrix W_{1:N} to have deficiency margin c > 0, it must (by Claim 1) have full rank, and this is only possible if there is no intermediate dimension d_i smaller than min{d_0, d_N}. We make no other assumptions on network architecture (depth, input/output/hidden dimensions). The cornerstone upon which Theorem 1 rests is the following lemma, showing non-trivial descent whenever σ_min(W_{1:N}) is bounded away from zero: Lemma 1. Under the conditions of Theorem 1, we have that for every t = 0, 1, 2, ...: ℓ(t+1) ≤ ℓ(t) − (η/2) σ_min(W_{1:N}(t))^{2(N−1)/N} ‖(dL¹/dW)(W_{1:N}(t))‖_F². (Note that the term (dL¹/dW)(W_{1:N}(t)) here stands for the gradient of L¹(·), a convex loss over (directly parameterized) linear models, at the point W_{1:N}(t), the end-to-end matrix of the network at iteration t. It is therefore non-zero anywhere but at a global minimum.) Proof of Lemma 1 (in idealized setting; for complete proof see Appendix D.2). We prove the lemma here for the idealized setting of perfect initial balancedness (δ = 0), i.e. W_{j+1}ᵀ W_{j+1} = W_j W_jᵀ, ∀j ∈ {1, ..., N−1}, and infinitesimally small learning rate (η → 0⁺), i.e. gradient flow: Ẇ_j(τ) = −(∂L^N/∂W_j)(W_1(τ), ..., W_N(τ)), j = 1, ..., N, where τ is a continuous time index, and the dot symbol (in Ẇ_j(τ)) signifies derivative with respect to time. The complete proof, for the realistic case of approximate balancedness and discrete updates (δ, η > 0), is similar but much more involved, and appears in Appendix D.2. Recall that ℓ(t), the objective value at iteration t of gradient descent, is equal to L¹(W_{1:N}(t)).
Accordingly, for the idealized setting in consideration, we would like to show: DISPLAYFORM3 We will see that a stronger version of Equation FORMULA1 holds, namely, one without the 1/2 factor (which only appears due to discretization).By (Theorem 1 and Claim 1 in) BID1, the weights W 1 (τ),..., W N (τ) remain balanced throughout the entire optimization, and that implies the end-to-end matrix W 1:N (τ) moves according to the following differential equation: DISPLAYFORM4 where vec(A), for an arbitrary matrix A, stands for vectorization in column-first order, and P W 1:N (τ) is a positive semidefinite matrix whose eigenvalues are all greater than or equal to DISPLAYFORM5 Taking the derivative of L 1 (W 1:N (τ)) with respect to time, we obtain the sought-after Equation (with no 1/2 factor): DISPLAYFORM6 The first transition here (equality) is an application of the chain rule; the second (equality) plugs in Equation FORMULA1; the third (inequality) from the fact that the eigenvalues of the symmetric matrix P W 1:N (τ) are no smaller than σ min (W 1:N (τ)) 2(N −1)/N (recall that · stands for Euclidean norm); and the last (equality) is trivial -A F = vec(A) for any matrix A.With Lemma 1 established, the proof of Theorem 1 readily follows: DISPLAYFORM7 Plugging this into Equation while recalling that (t) = L 1 (W 1:N (t)) (Equation FORMULA6), we have (by Lemma 1) that for every t = 0, 1, 2,...: DISPLAYFORM8 Published as a conference paper at ICLR 2019Since the coefficients 1 − η · σ min (W 1:N (t)) 2(N −1) N are necessarily non-negative (otherwise would contradict non-negativity of L 1 (·)), we may unroll the inequalities, obtaining: DISPLAYFORM9 Now, this in particular means that for every t = 0, 1, 2,...: DISPLAYFORM10 Deficiency margin c of W 1:N along with Claim 1 thus imply σ min W 1:N (t) ≥ c, which when inserted back into Equation yields, for every t = 1, 2, 3,...: DISPLAYFORM11 is obviously non-negative, and it is also no greater than 1 (otherwise would contradict non-negativity of L 1 (·)). We may therefore incorporate the inequality FORMULA1: DISPLAYFORM12 DISPLAYFORM13 Recalling again that (t) = L 1 (W 1:N (t)) (Equation FORMULA6), we conclude the proof. We define the following procedure, balanced initialization, which assigns weights randomly while ensuring perfect balancedness: DISPLAYFORM0.., N, assigns these weights as follows: DISPLAYFORM1 (ii) Take singular value decomposition A = U ΣV, where DISPLAYFORM2 where the symbol " " stands for equality up to zero-valued padding. DISPLAYFORM3 The concept of balanced initialization, together with Theorem 1, leads to a guarantee for linear convergence (applicable to output dimension 1 -scalar regression) that holds with constant probability over the randomness in initialization:Theorem 2. For any constant 0 < p < 1/2, there are constants d 0, a > 0 8 such that the following holds. Assume d N = 1, d 0 ≥ d 0, and that the weights W 1,..., W N are subject to balanced initialization (Procedure 1) such that the entries in W 1:N are independent zero-centered Gaussian perturbations with standard deviation s ≤ Φ 2 / ad 2 0. Suppose also that we run gradient 6 These assignments can be accomplished since min{d1, . . ., dN−1} ≥ min{d0, dN}. 7 By design W1:N = A and W j+1 Wj+1 = WjW j, ∀j ∈ {1, . . ., N −1} -these properties are actually all we need in Theorem 2, and step (iii) in Procedure 1 can be replaced by any assignment that meets them.. 
Then, with probability at least p over the random initialization, we have that for every ε > 0 and T large enough, the loss at iteration T of gradient descent, ℓ(T), is no greater than ε. Proof. See Appendix D.3. Balanced initialization (Procedure 1) possesses theoretical advantages compared with the customary layer-wise independent scheme: it allowed us to derive a convergence guarantee that holds with constant probability over the randomness of initialization (Theorem 2). In this section we present empirical evidence suggesting that initializing with balancedness may be beneficial in practice as well. For conciseness, some of the details behind our implementation are deferred to Appendix E. We began by experimenting in the setting covered by our analysis: linear neural networks trained via gradient descent minimization of ℓ2 loss over whitened data. The dataset chosen for the experiment was UCI Machine Learning Repository's "Gas Sensor Array Drift at Different Concentrations". Specifically, we used the dataset's "Ethanol" problem, a scalar regression task with 2565 examples, each comprising 128 features (one of the largest numeric regression tasks in the repository). Starting with the customary initialization of layer-wise independent random Gaussian perturbations centered at zero, we trained a three layer network (N = 3) with hidden widths (d_1, d_2) set to 32, and measured the time (number of iterations) it takes to converge (reach training loss within ε = 10⁻⁵ from optimum) under different choices of standard deviation for the initialization. To account for the possibility of different standard deviations requiring different learning rates (values for η), we applied, for each standard deviation independently, a grid search over learning rates, and recorded the one that led to fastest convergence. The result of this test is presented in FIG2(a). As can be seen, there is a range of standard deviations that leads to fast convergence (a few hundred iterations or less), below and above which optimization decelerates by orders of magnitude. This accords with our discussion at the end of Subsection 3.3, by which overly small initialization ensures approximate balancedness (small δ; see Definition 1) but diminishes deficiency margin (small c; see Definition 2), the "vanishing gradient problem", whereas large initialization hinders both approximate balancedness and deficiency margin, the "exploding gradient problem". In that regard, as a sanity test for the validity of our analysis, in a case where approximate balancedness is met at initialization (small standard deviation), we measured its persistence throughout optimization. As FIG2(c) shows, our theoretical findings manifest themselves here: trajectories of gradient descent indeed preserve weight balancedness. In addition to a three layer network, we also evaluated a deeper, eight layer model (with hidden widths identical to the former: N = 8, d_1 = · · · = d_7 = 32). In particular, using the same experimental protocol as above, we measured convergence time under different choices of standard deviation for the initialization. FIG2(a) displays the result of this test alongside that of the three layer model. As the figure shows, transitioning from three layers to eight aggravated the instability with respect to initialization: there is now a narrow band of standard deviations that lead to convergence in reasonable time, and outside of this band convergence is extremely slow, to the point where it does not take place within the duration we allowed (10⁶ iterations). A compact sketch of Procedure 1, which the next experiments employ, is given below.
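Procedure 1 itself admits a compact implementation; a minimal NumPy sketch for the uniform-width case (the general rectangular case additionally needs the zero-padding of step (iii), which is omitted here, and the function name is my own):

```python
import numpy as np

rng = np.random.default_rng(0)

def balanced_init(d, N, std):
    # Sample the end-to-end matrix A directly, then split it into N
    # perfectly balanced factors via its SVD:
    #   W_1 = S^{1/N} V^T,  W_j = S^{1/N} for 1 < j < N,  W_N = U S^{1/N},
    # so that W_N ... W_1 = A and W_{j+1}^T W_{j+1} = W_j W_j^T for all j.
    A = rng.normal(0.0, std, (d, d))
    U, s, Vt = np.linalg.svd(A)
    S_root = np.diag(s ** (1.0 / N))
    return [S_root @ Vt] + [S_root] * (N - 2) + [U @ S_root]

Ws = balanced_init(d=32, N=8, std=1e-2)
P = Ws[0]
for W in Ws[1:]:
    P = W @ P   # P reproduces the sampled end-to-end matrix up to round-off
```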
From the perspective of our analysis, a possible explanation for the aggravation is as follows: under layer-wise independent initialization, the magnitude of the end-to-end matrix W_{1:N} depends on the standard deviation in a manner that is exponential in depth, thus for large depths the range of standard deviations that lead to moderately sized W_{1:N} (as required for a deficiency margin) is limited, and within this range, there may not be many standard deviations small enough to ensure approximate balancedness. The procedure of balanced initialization (Procedure 1) circumvents these difficulties: it assigns W_{1:N} directly (no exponential dependence on depth), and distributes its content between the individual weights W_1, ..., W_N in a perfectly balanced fashion. Rerunning the experiment of FIG2(a) with this initialization replacing the customary layer-wise scheme (using the same experimental protocol), we obtained the results shown in FIG2(b).

From the caption of FIG2(c)-(d): (c) for a run with standard deviation 10⁻³, the plot shows degree of balancedness (minimal δ satisfying ‖W_{j+1}ᵀ W_{j+1} − W_j W_jᵀ‖_F ≤ δ, ∀j ∈ {1, ..., N − 1}) against magnitude of weights (min_{j=1,...,N} ‖W_j W_jᵀ‖_F) throughout optimization; approximate balancedness persists under gradient descent, in line with our theoretical analysis. (d) Convergence of stochastic gradient descent training the fully-connected non-linear (ReLU) neural network of the MNIST tutorial built into TensorFlow (details in text); customary layer-wise independent and balanced initializations, both based on Gaussian perturbations centered at zero, are evaluated with varying standard deviations; for each configuration 10 epochs of optimization are run, followed by measurement of the training loss; although our theoretical analysis does not cover non-linear activation, softmax-cross-entropy loss and stochastic optimization, the benefit of balanced initialization carries over to this setting.

As a final experiment, we evaluated the effect of balanced initialization in a setting that involves non-linear activation, softmax-cross-entropy loss and stochastic optimization (factors not accounted for by our analysis). For this purpose, we turned to the MNIST tutorial built into TensorFlow (BID0), which comprises a fully-connected neural network with two hidden layers (width 128 followed by 32) and ReLU activation, trained through stochastic gradient descent (over softmax-cross-entropy loss) with batch size 100, initialized via customary layer-wise independent Gaussian perturbations centered at zero. While keeping the learning rate at its default value 0.01, we varied the standard deviation of initialization, and for each value measured the training loss after 10 epochs. We then replaced the original (layer-wise independent) initialization with a balanced initialization based on Gaussian perturbations centered at zero (the latter was implemented per Procedure 1, disregarding non-linear activation), and repeated the process. The results of this experiment are shown in FIG2(d). Although our theoretical analysis does not cover non-linear activation, softmax-cross-entropy loss or stochasticity in optimization, its result of balanced initialization leading to improved (faster and more stable) convergence carried over to such a setting. Theoretical study of gradient-based optimization in deep learning is a highly active area of research.
As discussed in Section 1, a popular approach is to show that the objective landscape admits the properties of no poor local minima and strict saddle, which together ensure convergence to global minimum. Many works, both classic (e.g. BID2) and recent (e.g. BID9; Kawaguchi, 2016), have focused on the validity of these properties in different deep learning settings. Nonetheless, to our knowledge, the success of landscape-driven analyses in formally proving convergence to global minimum for a gradient-based algorithm has thus far been limited to shallow (two layer) models only (e.g. BID12). An alternative to the landscape approach is a direct analysis of the trajectories taken by the optimizer. Various papers (e.g. BID6) have recently adopted this strategy, but their analyses only apply to shallow models. In the context of linear neural networks, deep (three or more layer) models have also been treated, cf. Saxe et al. (2014) and BID1, from which we draw certain technical ideas for proving Lemma 1. However these treatments all apply to gradient flow (gradient descent with infinitesimally small learning rate), and thus do not formally address the question of computational efficiency. To our knowledge, BID3 is the only existing work rigorously proving convergence to global minimum for a conventional gradient-based algorithm training a deep model. This work is similar to ours in the sense that it also treats linear neural networks trained via minimization of ℓ2 loss over whitened data, and proves linear convergence (to global minimum) for gradient descent. It is more limited in that it only covers the subclass of linear residual networks, i.e. the specific setting of uniform width across all layers (d_0 = · · · = d_N) along with identity initialization. We on the other hand allow the input, output and hidden dimensions to take on any configuration that avoids "bottlenecks" (i.e. admits min{d_1, ..., d_{N−1}} ≥ min{d_0, d_N}), and from initialization require only approximate balancedness (Definition 1), supporting many options beyond identity. In terms of the target matrix Φ, BID3 treats two separate scenarios: (i) Φ is symmetric and positive definite; and (ii) Φ is within distance 1/(10e) from identity. Our analysis does not fully account for scenario (i), which seems to be somewhat of a singularity, where all layers are equal to each other throughout optimization (see proof of Theorem 2 in BID3). We do however provide a strict generalization of scenario (ii): our assumption of deficiency margin (Definition 2), in the setting of linear residual networks, is met if the distance between target and identity is less than 0.5. For deep linear neural networks, we have rigorously proven convergence of gradient descent to global minima, at a linear rate, provided that the initial weight matrices are approximately balanced and the initial end-to-end matrix has positive deficiency margin. The result applies to networks with arbitrary depth, and any configuration of input/output/hidden dimensions that supports full rank, i.e. in which no hidden layer has dimension smaller than both the input and output. Our assumptions on initialization, approximate balancedness and deficiency margin, are both necessary, in the sense that violating any one of them may lead to convergence failure, as we demonstrated explicitly. Moreover, for networks with output dimension 1 (scalar regression), we have shown that a balanced initialization, i.e.
a random choice of the end-to-end matrix followed by a balanced partition across all layers, leads the assumptions to be met, and thus convergence to take place, with constant probability. Rigorously proving efficient convergence with significant probability under customary layer-wise independent initialization remains an open problem. Recent work suggests that this may not be possible, as at least in some settings, the number of iterations required for convergence is exponential in depth with overwhelming probability. This negative result, a theoretical manifestation of the "vanishing gradient problem", is circumvented by balanced initialization. Through simple experiments we have shown that the latter can lead to favorable convergence in deep learning practice, as it does in theory. Further investigation of balanced initialization, including development of variants for convolutional layers, is regarded as a promising direction for future research. The analysis in this paper uncovers special properties of the optimization landscape in the vicinity of gradient descent trajectories. We expect similar ideas to prove useful in further study of gradient descent on non-convex objectives, including training losses of deep non-linear neural networks. A ℓ2 LOSS OVER WHITENED DATA Recall the ℓ2 loss of a linear predictor W ∈ R^{d_y×d_x} as defined in Section 2: L(W) = (1/2m) ‖W X − Y‖_F². By definition, when data is whitened, Λ_xx is equal to identity, yielding L(W) = (1/2) ‖W − Λ_yx‖_F² + c, with c independent of W. For approximate balancedness we have the following claim, which shows that it becomes more and more likely the smaller the standard deviation of initialization is: Claim 2. Assume all entries in the matrices W_j ∈ R^{d_j×d_{j−1}}, j = 1, ..., N, are drawn independently at random from a Gaussian distribution with mean zero and standard deviation s > 0. Then, for any δ > 0, the probability of W_1, ..., W_N being δ-balanced is at least max{0, 1 − 10δ DISPLAYFORM2}. In terms of deficiency margin, the claim below treats the case of a single output model (scalar regression), and shows that if the standard deviation of initialization is sufficiently small, with probability close to 0.5, a deficiency margin will be met. However, for this deficiency margin to meet a chosen threshold c, the standard deviation need be sufficiently large. Claim 3. There is a constant C_1 > 0 such that the following holds. Consider the case where d_N = 1 and d_0 ≥ 20, and suppose all entries in the matrices W_j ∈ R^{d_j×d_{j−1}}, j = 1, ..., N, are drawn independently at random from a Gaussian distribution with mean zero, whose standard deviation s > 0 is small with respect to the target (bounded as in DISPLAYFORM4). Then, the probability of the end-to-end matrix W_{1:N} having deficiency margin c with respect to Φ is at least 0.49 if c satisfies DISPLAYFORM5. Proof. See Appendix D.5.

Footnote 13: The requirement d_0 ≥ 20 is purely technical, designed to simplify expressions in the claim.
Footnote 14: The probability 0.49 can be increased to any p < 1/2 by increasing the constant 10⁵ in the upper bounds for s and c.
Footnote 15: It is not difficult to see that the latter threshold is never greater than the upper bound for s, thus sought-after standard deviations always exist.

In this appendix we show that the assumptions on initialization facilitating our main convergence result (Theorem 1), approximate balancedness and deficiency margin, are both necessary, by demonstrating cases where violating each of them leads to convergence failure.
This accords with widely observed empirical phenomena, by which successful optimization in deep learning crucially depends on careful initialization. Claim 4 below shows that if one omits from Theorem 1 the assumption of approximate balancedness at initialization, no choice of learning rate can guarantee convergence: Claim 4. Assume gradient descent with some learning rate η > 0 is applied to a network whose depth N is even, and whose input, output and hidden dimensions d_0, ..., d_N are all equal to some d ∈ N. Then, there exist target matrices Φ such that the following holds. For any c with 0 < c < σ_min(Φ), there are initializations for which the end-to-end matrix W_{1:N} has deficiency margin c with respect to Φ, and yet convergence will fail: the objective will never go beneath a positive constant. Proof. See Appendix D.6. In terms of deficiency margin, we provide (by adapting Theorem 4 in BID3) a different, somewhat stronger result: there exist settings where initialization violates the assumption of deficiency margin, and despite being perfectly balanced, leads to convergence failure, for any choice of learning rate: Claim 5. Consider a network whose depth N is even, and whose input, output and hidden dimensions d_0, ..., d_N are all equal to some d ∈ N. Then, there exist target matrices Φ for which there are non-stationary initializations W_1, ..., W_N that are 0-balanced, and yet lead gradient descent, under any learning rate, to fail: the objective will never go beneath a positive constant. Proof. See Appendix D.7. We introduce some additional notation here in addition to the notation specified in Section 2. We use ‖A‖_σ to denote the spectral norm (largest singular value) of a matrix A, and sometimes ‖v‖_2 as an alternative to ‖v‖, the Euclidean norm of a vector v. Recall that for a matrix A, vec(A) is its vectorization in column-first order. We let F(·) denote the cumulative distribution function of the standard normal distribution, i.e. F(x) = DISPLAYFORM0. To simplify the presentation we will oftentimes use W as an alternative (shortened) notation for W_{1:N}, the end-to-end matrix of a linear neural network. We will also use L(·) as shorthand for L¹(·), the loss associated with a (directly parameterized) linear model, i.e. L(W) := (1/2) ‖W − Φ‖_F². Therefore, in the context of gradient descent training a linear neural network, the following expressions all represent the loss at iteration t: ℓ(t) = L^N(W_1(t), ..., W_N(t)) = L¹(W_{1:N}(t)) = L(W(t)). Also, for weights W_j ∈ R^{d_j×d_{j−1}}, j = 1, ..., N of a linear neural network, we generalize the notation W_{1:N}, and define W_{j:j′} := W_{j′} W_{j′−1} ⋯ W_j for every 1 ≤ j ≤ j′ ≤ N. Note that W_{j:j′}ᵀ = W_jᵀ W_{j+1}ᵀ ⋯ W_{j′}ᵀ. Then, by a simple gradient calculation, the gradient descent updates can be written as W_j(t+1) = W_j(t) − η W_{j+1:N}ᵀ(t) (W_{1:N}(t) − Φ) W_{1:j−1}ᵀ(t), j = 1, ..., N, where we define W_{1:0}(t) := I_{d_0} and W_{N+1:N}(t) := I_{d_N} for completeness.

Footnote 16: For simplicity of presentation, the claim treats the case of even depth and uniform dimension across all layers. It can easily be extended to account for arbitrary depth and input/output/hidden dimensions.
Footnote 17: This statement becomes trivial if one allows initialization at a suboptimal stationary point, e.g. W_j = 0, j = 1, ..., N. Claim 5 rules out such trivialities by considering only non-stationary initializations.
Finally, recall the standard definition of the tensor product of two matrices (also known as the Kronecker product): for matrices A ∈ R m A ×n A, B ∈ R m B ×n B, their tensor product A ⊗ B ∈ R m A m B ×n A n B is defined as DISPLAYFORM4 where a i,j is the element in the i-th row and j-th column of A. Proof. Recall that for any matrices A and B of compatible sizes σ min (A + B) ≥ σ min (A) − σ max (B), and that the Frobenius norm of a matrix is always lower bounded by its largest singular value . Using these facts, we have: DISPLAYFORM0 To prove Lemma 1, we will in fact prove a stronger , Lemma 2 below, which states that for each iteration t, in addition to being satisfied, certain other properties are also satisfied, namely: (i) the weight matrices W 1 (t),..., W N (t) are 2δ-balanced, and (ii) W 1 (t),..., W N (t) have bounded spectral norms. Lemma 2. Suppose the conditions of Theorem 1 are satisfied. Then for all t ∈ N ∪ {0}, DISPLAYFORM0 First we observe that Lemma 1 is an immediate consequence of Lemma 2.Proof of Lemma 1. Notice that condition B(t) of Lemma 2 for each t ≥ 1 immediately establishes the of Lemma 1 at time step t − 1. We next prove some preliminary lemmas which will aid us in the proof of Lemma 2. The first is a matrix inequality that follows from Lidskii's theorem. For a matrix A, let Sing(A) denote the rectangular diagonal matrix of the same size, whose diagonal elements are the singular values of A arranged in non-increasing order (starting from the position). Using Lemma 3, we get: Lemma 4. Suppose D 1, D 2 ∈ R d×d are non-negative diagonal matrices with non-increasing values along the diagonal and O ∈ R d×d is an orthogonal matrix. Suppose that D 1 − OD 2 O F ≤, for some > 0. Then: DISPLAYFORM0 Proof. Since D 1 and OD 2 O T are both symmetric positive semi-definite matrices, their singular values are equal to their eigenvalues. Moreover, the singular values of D 1 are simply its diagonal elements and the singular values of OD 2 O T are simply the diagonal elements of D 2. Thus by Lemma 3 we get that DISPLAYFORM1, and by the triangle inequality it follows that DISPLAYFORM2 DISPLAYFORM3 and that for some ν > 0, M > 0, the matrices DISPLAYFORM4 and for 1 ≤ j ≤ N, W j σ ≤ M. Then, for 1 ≤ j ≤ N, DISPLAYFORM5 and DISPLAYFORM6 Moreover, if σ min denotes the minimum singular value of W 1:N, σ 1,min denotes the minimum singular value of W 1 and σ N,min denotes the minimum singular value of W N, then DISPLAYFORM7 Proof. For 1 ≤ j ≤ N, let us write the singular value decomposition of W j as W j = U j Σ j V j, where U j ∈ R dj ×dj and V j ∈ R dj−1×dj−1 are orthogonal matrices and Σ j ∈ R dj ×dj−1 is diagonal. We may assume without loss of generality that the singular values of W j are non-increasing along the diagonal of Σ j. Then we can write as DISPLAYFORM8 Since the Frobenius norm is invariant to orthogonal transformations, we get that DISPLAYFORM9 By Lemma 4, we have that DISPLAYFORM10 We may rewrite the latter of these two inequalities as DISPLAYFORM11 For matrices A, B, we have that AB F ≤ A σ · B F. Therefore, for j + 1 ≤ i ≤ N, we have that DISPLAYFORM12 We now argue that DISPLAYFORM13 verifying the case k = 1. 
To see the general case, since square diagonal matrices commute, we have that DISPLAYFORM14 By the triangle inequality, we then have that DISPLAYFORM0 By an identical argument (formally, by replacing W j with W N −j+1), we get that DISPLAYFORM1 and FORMULA2 verify FORMULA1 and FORMULA1, respectively, so it only remains to verify.Letting j = 1 in, we get DISPLAYFORM2 Let us write the eigendecomposition of W 1:N W 1:N with an orthogonal eigenbasis as W 1:N W 1:N = U ΣU, where Σ is diagonal with its (non-negative) elements arranged in non-increasing order and U is orthogonal. We can write the left hand side of FORMULA1 DISPLAYFORM3 By Lemma 4, we have that DISPLAYFORM4 Recall that W ∈ R d N ×d0. Suppose first that d N ≤ d 0. Let σ min denote the minimum singular value of W 1:N (so that σ 2 min is the element in the (d N, d N DISPLAYFORM5, and σ N,min denote the minimum singular value (i.e. diagonal element) of Σ N, which lies in the DISPLAYFORM6 By an identical argument using FORMULA2, we get that, in the case that d 0 ≤ d N, if σ 1,min denotes the minimum singular value of Σ 1, then DISPLAYFORM7 (Notice that we have used the fact that the nonzero eigenvalues of DISPLAYFORM8 Proof. For 1 ≤ j ≤ N, let us write the singular value decomposition of W j as W j = U j Σ j V j, where the singular values of W j are decreasing along the main diagonal of Σ j . By Lemma 4, we have that for DISPLAYFORM9 Write M = max 1≤j≤N W j σ = max 1≤j≤N Σ j σ . By the above we have that DISPLAYFORM10 Let the singular value decomposition of W 1:N be denoted by W 1:N = U ΣV, so that Σ σ ≤ C. Then by of Lemma 5 and Lemma 4 (see also FORMULA2, where the same argument was used), we have that DISPLAYFORM11 Now recall that ν is chosen so that ν ≤ C 2/N 30·N 2. Suppose for the purpose of contradiction that there is some j such that W j W j σ > 2 1/N C 2/N. Then it must be the case that DISPLAYFORM12 where we have used that 2 1/N − (5/4) 1/N ≥ 1 30N for all N ≥ 2, which follows by considering the Laurent series exp(1/z) = DISPLAYFORM13 We now rewrite inequality as DISPLAYFORM14 Next, using FORMULA2 and (1 + 1/x) x ≤ e for all x > 0, DISPLAYFORM15 Since DISPLAYFORM16, we get by combining FORMULA2 and FORMULA2 that DISPLAYFORM17 and since 1 − e/20 > 1/(5/4), it follows that Σ N Σ N σ < (5/4) 1/N C 2/N, which contradicts. It follows that for all 1 ≤ j ≤ N, W j W j σ ≤ 2 1/N C 2/N. The of the lemma then follows from the fact that DISPLAYFORM18 Lemma 7 below states that if certain conditions on W 1 (t),..., W N (t) are met, the sought-after descent -Equation -will take place at iteration t. We will later show (by induction) that the required conditions indeed hold for every t, thus the descent persists throughout optimization. The proof of Lemma 7 is essentially a discrete, single-step analogue of the continuous proof for Lemma 1 (covering the case of gradient flow) given in Section 3. Lemma 7. Assume the conditions of Theorem 1. Moreover, suppose that for some t, the matrices W 1 (t),..., W N (t) and the end-to-end matrix W (t):= W 1:N (t) satisfy the following properties: DISPLAYFORM0 Then, after applying a gradient descent update we have that DISPLAYFORM1 Proof. For simplicity write M = (4 Φ F) 1/N and B = Φ F. 
We first claim that DISPLAYFORM2 Since c ≤ σ min, for to hold it suffices to have DISPLAYFORM3 which is guaranteed by.Next, we claim that DISPLAYFORM4 The second inequality above is trivial, and for the first to hold, since c ≤ Φ F, it suffices to take DISPLAYFORM5 which is guaranteed by the definition of δ in Theorem 1.Next we continue with the rest of the proof. It follows from that DISPLAYFORM6 where denotes higher order terms in η. We now bound the Frobenius norm of . To do this, note that since DISPLAYFORM7 18 Here, for matrices A1,..., AK such that AK AK−1 · · · A1 is defined, we write DISPLAYFORM8 where the last inequality uses ηM N −2 BN ≤ 1/2, which is a consequence of. Next, by Lemma 5 with ν = 2δ, DISPLAYFORM9 Next, by standard properties of tensor product, we have that DISPLAYFORM10 Let us write eigenvalue decompositions DISPLAYFORM11 If λ D denotes the minimum diagonal element of D and λ E denotes the minimum diagonal element of E, then the minimum diagonal element of Λ is therefore at least λ It follows as a of the above inequalities that if we write DISPLAYFORM12 DISPLAYFORM13 Then we have DISPLAYFORM14 where the first inequality follows since DISPLAYFORM15 F is 1-smooth as a function of W. Next, by FORMULA2 and FORMULA3, DISPLAYFORM16 Thus DISPLAYFORM17 By, which bound η, 2δ, respectively, we have that DISPLAYFORM18 Proof of Lemma 2. We use induction on t, beginning with the base case t = 0. Since the weights W 1,..., W N are δ-balanced, we get that A holds automatically. To establish B, note that since W 1:N has deficiency margin c > 0 with respect to Φ, we must have DISPLAYFORM0 To show that the above implies C, we use condition A and Lemma 6 with C = 2 Φ F and ν = 2δ. By the definition of δ in Theorem 1 and since c ≤ Φ F, we have that DISPLAYFORM1 as required by Lemma 6. As A and verify the preconditions 1. and 2., respectively, of Lemma 6, it follows that for 1 verifying C and completing the proof of the base case. DISPLAYFORM2 The proof of Lemma 2 follows directly from the following inductive claims. To prove this, we use Lemma 7. We verify first that the preconditions hold. First, C(t) immediately gives condition 1. of Lemma 7. By B(t), we have that W (t) − Φ σ ≤ W (t) − Φ F ≤ Φ F, giving condition 2. of Lemma 7. A(t) immediately gives condition 3. of Lemma 7. Finally, by B(t), we have that DISPLAYFORM0, establishing B(t + 1).,..., B(t), C(t) ⇒ A(t + 1), A (t + 1). To prove this, note that for 1 ≤ j ≤ N − 1, DISPLAYFORM1 DISPLAYFORM2 By B,..., B(t), W 1:N (t) − Φ F ≤ Φ F. By the triangle inequality it then follows that W 1: DISPLAYFORM3. By Lemma 6 with C = 2 Φ F, ν = 2δ (so that is satisfied), DISPLAYFORM4 In the first inequality above, we have also used the fact that for matrices A, B such that AB is defined, AB F ≤ A σ B F. FORMULA3 gives us A (t + 1).We next establish A(t + 1). By B(i) for 0 ≤ i ≤ t, we have that DISPLAYFORM5 Using A (i) for 0 ≤ i ≤ t and summing over i gives DISPLAYFORM6 Next, by B,..., B(t), we have that L(W (i)) ≤ L(W) for i ≤ t. Since W has deficiency margin of c and by Claim 1, it then follows that σ min (W (i)) ≥ c for all i ≤ t. Therefore, by summing B,..., B(t), DISPLAYFORM7 Therefore, DISPLAYFORM8 where FORMULA3 follows from the definition of η in FORMULA12, and the last equality follows from definition of δ in Theorem 1. By, it follows that DISPLAYFORM9 verifying A(t + 1). We apply Lemma 6 with ν = 2δ and C = 2 Φ F. First, the triangle inequality and B(t) give DISPLAYFORM0 verifying precondition 2. of Lemma 6. A(t) verifies condition 1. 
of Lemma 6, so for DISPLAYFORM1 The proof of Lemma 2 then follows by induction on t. Theorem 2 is proven by combining Lemma 8 below, which implies that the balanced initialization is likely to lead to an end-to-end matrix W 1:N with sufficiently large deficiency margin, with Theorem 1, which establishes convergence. DISPLAYFORM0 be a vector. Suppose that µ is a rotation-invariant distribution 19 over R d with a well-defined density, such that, for some 0 < < 1, DISPLAYFORM1 Then, with probability at least DISPLAYFORM2, V will have deficiency margin Φ 2 /(b 2 d) with respect to Φ. 19 Recall that a distribution on vectors V ∈ R d is rotation-invariant if the distribution of V is the same as the distribution of OV, for any orthogonal d × d matrix O. If V has a well-defined density, this is equivalent to the statement that for any r > 0, the distribution of V conditioned on V 2 = r is uniform over the sphere centered at the origin with radius r. The proof of Lemma 8 is postponed to Appendix D.5, where Lemma 8 will be restated as Lemma 16.One additional technique is used in the proof of Theorem 2, which leads to an improvement in the guaranteed convergence rate. Because the deficiency margin of W 1:N is very small, namely O(Φ 2 /d 0) (which is necessary for the theorem to maintain constant probability), at the beginning of optimization, (t) will decrease very slowly. However, after a certain amount of time, the deficiency margin of W 1:N (t) will increase to a constant, at which point the decrease of (t) will be much faster. To capture this acceleration, we apply Theorem 1 a second time, using the larger deficiency margin at the new "initialization." From a geometric perspective, we note that the matrices W 1,..., W N are very close to 0, and the point at which W j = 0 for all j is a saddle. Thus, the increase in (t) − (t + 1) over time captures the fact that the iterates FIG2,..., W N (t)) escape a saddle point. Proof of Theorem 2. Choose some a ≥ 2, to be specified later. By assumption, all entries of the end-to-end matrix at time 0, W 1:N, are distributed as independent Gaussians of mean 0 and standard deviation s ≤ Φ 2 / ad 2 0. We will apply Lemma 8 to the vector W 1:N ∈ R d0. Since its distribution is obviously rotation-invariant, in remains to show that the distribution of the norm W 1:N 2 is not too spread out. The following lemma -a direct consequence of the Chernoff bound applied to the χ 2 distribution with d 0 degrees of freedom -will give us the desired :Lemma 9 (Laurent and Massart FORMULA2, Lemma 1). Suppose that d ∈ N and V ∈ R d is a vector whose entries are i.i.d. Gaussians with mean 0 and standard deviation s. Then, for any k > 0, DISPLAYFORM0 By Lemma 9 with k = d 0 /16, we have that DISPLAYFORM1 We next use Lemma 8, with DISPLAYFORM2; note that since a ≥ 2, b 1 ≥ 1, as required by the lemma. Lemma 8 then implies that with probability at least DISPLAYFORM3 W 1:N will have deficiency margin s 2 d 0 /2 Φ 2 with respect to Φ. By the definition of balanced initialization (Procedure 1) W 1,..., W N are 0-balanced. Since 2 4 · 6144 < 10 5, our assumption on η gives DISPLAYFORM4 so that Equation FORMULA12 holds with c = DISPLAYFORM5. The conditions of Theorem 1 thus hold with probability at least that given in Equation. 
In such a constant probability event, by Theorem 1 (and the fact that a positive deficiency margin implies DISPLAYFORM6 2), if we choose DISPLAYFORM7 then DISPLAYFORM8 Moreover, by condition A(t 0) of Lemma 2 and the definition of δ in Theorem 1, we have, for DISPLAYFORM9 We now apply Theorem 1 again, verifying its conditions again, this time with the initialization (W 1 (t 0),..., W N (t 0)). First note that the end-to-end matrix W 1:N (t 0) has deficiency margin c = Φ 2 /2 as shown above. The learning rate η, by Equation, satisfies Equation FORMULA12 with c = Φ 2 /2. Finally, since DISPLAYFORM10 for d 0 ≥ 2, by Equation FORMULA1, the matrices W 1 (t 0),..., W N (t 0) are δ-balanced with δ = DISPLAYFORM11. Iteration t 0 thus satisfies the conditions of Theorem 1 with deficiency margin Φ 2 /2, meaning that for DISPLAYFORM12 we will have (T) ≤. Therefore, by Equations FORMULA4 and FORMULA2, to ensure that (T) ≤, we may take DISPLAYFORM13 Recall that this entire analysis holds only with the probability given in Equation. As lim d→∞ (1 − 2 exp(−d/16)) = 1 and lim a→∞ (3 − 4F (2 2/a))/2 = 1/2, for any 0 < p < 1/2, there exist a, d 0 > 0 such that for d 0 ≥ d 0, the probability given in Equation FORMULA3 is at least p. This completes the proof. In the context of the above proof, we remark that the expressions 1 − 2 exp(−d 0 /16) and (3 − 4F (2 2/a))/2 converge to their limits of 1 and 1/2, respectively, as d 0, a → ∞ quite quickly. For instance, to obtain a probability of greater than 0.25 of the initialization conditions being met, we may take d 0 ≥ 100, a ≥ 100. We first consider the probability of δ-balancedness holding between any two layers: Lemma 10. Suppose a, b, d ∈ N and A ∈ R a×d, B ∈ R d×b are matrices whose entries are distributed as i.i.d. Gaussians with mean 0 and standard deviation s. Then for k ≥ 1, DISPLAYFORM0 Proof. Note that for 1 ≤ i, j ≤ d, let X ij be the random variable (A T A − BB T) ij, so that DISPLAYFORM1 We next note that for a normal random variable Y of variance s 2 and mean 0, DISPLAYFORM2 Then FORMULA3 follows from Markov's inequality. Now the proof of Claim 2 follows from a simple union bound:Proof of Claim 2. By of Lemma 10, for each 1 DISPLAYFORM3 By the union bound, DISPLAYFORM4 and the claim follows with δ = ks 2 10d 3 max. We begin by introducing some notation. FORMULA1 ). There is an absolute constant C 0 such that the following holds. Suppose that h is a multilinear polynomial of K variables X 1,..., X K and of degree N. Suppose that X 1,..., X K are i.i.d. Gaussian. Then, for any > 0: DISPLAYFORM0 The below lemma characterizes the norm of the end-to-end matrix W 1:N following zero-centered Gaussian initialization: Lemma 12. For any constant 0 < C 2 < 1, there is an absolute constant C 1 > 0 such that the following holds. Let N, d 0,..., d N −1 ∈ N. Set d N = 1. Suppose that for 1 ≤ j ≤ N, W j ∈ R dj ×dj−1 are matrices whose entries are i.i.d. Gaussians of standard deviation s and mean 0. Then DISPLAYFORM1 2, so that f is a polynomial of degree 2N in the entries of W 1,..., W N. Notice that DISPLAYFORM2 Since each g i0 is a multilinear polynomial in W 1,..., W N, we have that DISPLAYFORM3 For any constant B 1 (whose exact value will be specified below), it follows that DISPLAYFORM4 Next, by Lemma 11, there is an absolute constant C 0 > 0 such that for any > 0, and any DISPLAYFORM5 for each i 0, it follows that DISPLAYFORM6 Next, given 0 < C 2 < 1, choose = (1 − C 2)/(2C 0 N), and B 1 = 2/(1 − C 2). 
Then by FORMULA4 and FORMULA4 and a union bound, we have that DISPLAYFORM7 The of the lemma then follows by taking C 1 = max DISPLAYFORM8 DISPLAYFORM9 i0, which is a Gaussian with mean 0 and standard deviation s, since O i0 2 = 1. Since O i0, O i 0 = 0 for i 0 = i 0, the covariance between any two distinct entries of W 1 O is 0. Therefore, the entries of W 1 O are independent Gaussians with mean 0 and standard deviation s, just as are the entries of W 1. For a dimension d ∈ N, radius r > 0, and DISPLAYFORM0 Proof. In BID10, it is shown that the area of a (d, 1)-hyperspherical cap of height h is given by DISPLAYFORM1, where Using that C ⊆ D, we continue with the proof. Notice the fact that C ⊆ D is equivalent to DISPLAYFORM2 DISPLAYFORM3, by the structure of C and D. Since the probability that V lands in ∂C is at DISPLAYFORM4, this lower bound applies to V landing in ∂D as well. Since all V ∈ ∂D have distance at most 1 − 1/(ad) from Φ, and since σ min (Φ) = Φ 2 = 1, it follows that for any V ∈ ∂D, V − Φ 2 ≤ σ min (Φ) − 1/(ad). Therefore, with probability of at least DISPLAYFORM5, V has deficiency margin Φ 2 /(ad) with respect to Φ.Lemma 16 (Lemma 8 restated). Let d ∈ N, d ≥ 20; b 2 > b 1 ≥ 1 be real numbers (possibly depending on d); and Φ ∈ R d be a vector. Suppose that µ is a rotation-invariant distribution over R d with a well-defined density, such that, for some 0 < < 1, DISPLAYFORM6 Then, with probability at least DISPLAYFORM7, V will have deficiency margin Φ 2 /(b 2 d) with respect to Φ.Proof. By rescaling we may assume that Φ 2 = 1 without loss of generality. Then the deficiency margin of V is equal to 1 − V − Φ 2. µ has a well-defined density, so we can setμ to be the probability density function of V 2. Since µ is rotation-invariant, we can integrate over spherical coordinates, giving DISPLAYFORM8 where the first inequlaity used Lemma 15 and the fact that the distribution of V conditioned on V 2 = r is uniform on S d (r). Proof of Claim 3. We let W ∈ R 1×d0 R d0 denote the random vector W 1:N; also let µ denote the distribution of W, so that by Lemma 13, µ is rotation-invariant. Let C 1 be the constant from Lemma 12 for C 2 = 999/1000. For some a ≥ 10 5, the standard deviation of the entries of each W j is given by DISPLAYFORM0 Then by Lemma 12, ) with respect to Φ. But a ≥ 10 5 implies that this probability is at least 0.49, and from, DISPLAYFORM1 DISPLAYFORM2 Next recall the assumption in the hypothesis that s ≥ C 1 N (c · Φ 2 /(d 1 · · · d N −1)) 1/2N. Then the deficiency margin in is at least DISPLAYFORM3 completing the proof. Proof. The target matrices Φ that will be used to prove the claim satisfy σ min (Φ) = 1. We may assume without loss of generality that c ≥ 3/4, the reason being that if a matrix has deficiency margin c with respect to Φ and c < c, it certainly has deficiency margin c with respect to Φ.We first consider the case d = 1, so that the target and all matrices are simply real numbers; we will make a slight abuse of notation in identifying 1 × 1 matrices with their unique entries. We set Φ = 1. For all choices of η, we will set the initializations W 1,..., W N so that W 1:N = c. Then A ≤ W j ≤ max{η, 1}A.We prove the following lemma by induction:Lemma 17. For each t ≥ 1, the real numbers W 1 (t),..., W N (t) all have the same sign and this sign alternates for each integer t. Moreover, there are real numbers 2 ≤ B(t) < C(t) for t ≥ 1 such that for 1 ≤ j ≤ N, B(t) ≤ |W j (t)| ≤ C(t) and ηB(t) 2N −1 ≥ 20C(t).Proof. 
First we claim that we may take $B = \frac{\min\{\eta,1\}}{10}A$ and $C = \max\{\eta, 1\}A$. We have shown above that B ≤ |W_j| ≤ C for all j. Next we establish that $\eta B^{2N-1} \ge 20C$, verifying the base case. For the inductive step, suppose all $W_j(t)$ are positive for 1 ≤ j ≤ N, and set $B(t+1) = 9C(t)$ and $C(t+1) = \eta C(t)^{2N-1}$. Since N ≥ 2, we have that $\eta B(t+1)^{2N-1} = \eta(9C(t))^{2N-1} \ge \eta \cdot 9^3 \cdot C(t)^{2N-1} > 20\eta C(t)^{2N-1} = 20C(t+1)$. The case that all $W_j(t)$ are negative for 1 ≤ j ≤ N is nearly identical, with the same values for B(t+1), C(t+1) in terms of B(t), C(t), except all $W_j(t+1)$ will be positive. This establishes the inductive step and completes the proof of Lemma 17. For the general case where $d_0 = d_1 = \cdots = d_N = d$ for some d ≥ 1, we set $\Phi = I_d$, and given c, η, we set $W_j$ to be the d × d diagonal matrix where all diagonal entries except the first one are equal to 1, and where the first diagonal entry is given by Equation FORMULA1, where A is given by Equation. It is easily verified that all entries of $W_j(t)$, 1 ≤ j ≤ N, except for the first diagonal element of each matrix, will remain constant for all t ≥ 0, and that the first diagonal elements evolve exactly as in the 1-dimensional case presented above. Therefore the loss in the d-dimensional case is equal to the loss in the 1-dimensional case, which is always greater than some positive constant. We remark that the proof of Claim 4 establishes that the loss $\ell(t) := L^N(W_1(t), \ldots, W_N(t))$ grows at least exponentially in t for the chosen initialization. Such behavior, in which gradients and weights explode, indeed takes place in deep learning practice if initialization is not chosen with care. Proof. We will show that a target matrix $\Phi \in \mathbb{R}^{d \times d}$ which is symmetric with at least one negative eigenvalue, along with identity initialization ($W_j = I_d$, ∀j ∈ {1, ..., N}), satisfies the conditions of the claim. First, note that non-stationarity of initialization is met, as for any 1 ≤ j ≤ N the gradient at initialization is $\frac{\partial L^N}{\partial W_j}(I_d, \ldots, I_d) = I_d - \Phi \neq 0$ (Φ has a negative eigenvalue, so Φ ≠ I_d). Lemma 18 (BID3, Lemma 6). If $W_1, \ldots, W_N$ are all initialized to identity, Φ is symmetric, $\Phi = U D U^\top$ is a diagonalization of Φ, and gradient descent is performed with any learning rate, then for each t ≥ 0 there is a diagonal matrix $\tilde{D}(t)$ such that $W_j(t) = U \tilde{D}(t) U^\top$ for each 1 ≤ j ≤ N. By Lemma 18, for any choice of learning rate η, the end-to-end matrix at time t is given by $W_{1:N}(t) = U \tilde{D}(t)^N U^\top$. As long as some diagonal element of D is negative, say equal to −λ < 0, then $\ell(t) = L^N(W_1(t), \ldots, W_N(t)) = \frac{1}{2}\|U\tilde{D}(t)^N U^\top - \Phi\|_F^2 \ge \frac{1}{2}\lambda^2 > 0$, since N is even and the corresponding diagonal entry of $\tilde{D}(t)^N$ is therefore non-negative. Below we provide implementation details omitted from our experimental report (Section 4). The platform used for running the experiments is PyTorch. For compliance with our analysis, we applied PCA whitening to the numeric regression dataset from the UCI Machine Learning Repository. That is, all instances in the dataset were preprocessed by an affine operator that ensured zero mean and identity covariance matrix. Subsequently, we rescaled labels such that the uncentered cross-covariance matrix Λ_yx (see Section 2) has unit Frobenius norm (this has no effect on optimization other than calibrating the learning rate and standard deviation of initialization to their conventional ranges). With the training objective taking the form of Equation FORMULA1, we then computed the global optimum of the loss in accordance with the formula derived in Appendix A. In our experiments with linear neural networks, balanced initialization was implemented with the assignment written in step (iii) of Procedure 1. In the non-linear network experiment, we added, for each j ∈ {1, ..., N − 1}, a random orthogonal matrix to the right of $W_j$, and its transpose to the left of $W_{j+1}$; this assignment maintains the properties required from balanced initialization (see Footnote 7). During all experiments, whenever we applied grid search over learning rate, values between $10^{-4}$ and 1 (in regular logarithmic intervals) were tried.
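As a companion to these implementation details, here is a NumPy sketch of one reading of the balanced initialization (Procedure 1: sample the end-to-end matrix, then factor it through its SVD so that the layers are 0-balanced), along with the random-orthogonal insertion used in the non-linear experiment. The function names and the exact factor assignment are an interpretation, not code from the paper.

```python
import numpy as np

def balanced_init(d_out, d_in, N, s, rng):
    """Sample W_{1:N} ~ N(0, s^2) entrywise, then split it into N factors via
    the SVD so that W_{j+1}^T W_{j+1} = W_j W_j^T for every j (0-balanced)."""
    A = s * rng.standard_normal((d_out, d_in))
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    root = np.diag(S ** (1.0 / N))                        # Sigma^{1/N}
    return [root @ Vt] + [root] * (N - 2) + [U @ root]    # product recovers A

def insert_orthogonal(Ws, rng):
    """Insert Q^T Q = I between consecutive layers: the end-to-end product and
    0-balancedness are unchanged (left/right conventions may be flipped)."""
    for j in range(len(Ws) - 1):
        d = Ws[j].shape[0]
        Q, _ = np.linalg.qr(rng.standard_normal((d, d)))  # random orthogonal
        Ws[j] = Q @ Ws[j]
        Ws[j + 1] = Ws[j + 1] @ Q.T
    return Ws

rng = np.random.default_rng(0)
Ws = insert_orthogonal(balanced_init(16, 16, 4, 0.01, rng), rng)
```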
We analyze gradient descent for deep linear neural networks, providing a guarantee of convergence to global optimum at a linear rate.
944
scitldr
One of the fundamental problems in supervised classification, and in machine learning in general, is the modelling of non-parametric invariances that exist in data. Most prior art has focused on enforcing priors in the form of invariances to parametric nuisance transformations that are expected to be present in data. However, learning non-parametric invariances directly from data remains an important open problem. In this paper, we introduce a new architectural layer for convolutional networks which is capable of learning general invariances from data itself. This layer can learn invariance to non-parametric transformations and, interestingly, motivates and incorporates permanent random connectomes, thereby being called Permanent Random Connectome Non-Parametric Transformation Networks (PRC-NPTN). PRC-NPTN networks are initialized with random connections (not just weights) which are a small subset of the connections in a fully connected convolution layer. Importantly, these connections in PRC-NPTNs, once initialized, remain permanent throughout training and testing. Random connectomes make these architectures loosely more biologically plausible than many other mainstream network architectures which require highly ordered structures. We motivate randomly initialized connections as a simple method to learn invariance from data itself while invoking invariance towards multiple nuisance transformations simultaneously. We find that these randomly initialized permanent connections have positive effects on generalization, and outperform much larger ConvNet baselines and the recently proposed Non-Parametric Transformation Network (NPTN) on benchmarks that enforce learning invariances from the data itself. Such networks learn invariances directly from the data, with the only prior being the structure that allows them to do so, with an enhanced ability to learn non-parametric invariances through permanent random connectivity. Though we do not explore these biological connections in more detail, it is still an interesting observation. The common presence of random connections in the cortex at a local level leads us to ask: is it possible that such locally random connectivity is useful for learning? Both train and test data were augmented, leading to an increase in the overall complexity of the problem. No architecture was altered in any way between the two transformations, i.e. they were not designed for either transformation specifically. Discussion. We present all test errors for this experiment in Table.
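The excerpt does not include an implementation, but the core mechanism described, channel connections drawn at random once and then frozen, can be sketched as a masked convolution in PyTorch. The layer below is an illustrative reconstruction: the `fan_in` parameter and class name are invented here, and the actual PRC-NPTN layer contains additional NPTN machinery (such as transformation pooling) not modeled in this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RandomConnectomeConv(nn.Module):
    """Conv layer whose input->output channel connectivity is a small random
    subset of full connectivity, sampled once and permanent thereafter."""
    def __init__(self, in_ch, out_ch, kernel_size=3, fan_in=4):
        super().__init__()
        self.weight = nn.Parameter(
            0.01 * torch.randn(out_ch, in_ch, kernel_size, kernel_size))
        self.bias = nn.Parameter(torch.zeros(out_ch))
        mask = torch.zeros(out_ch, in_ch)
        for o in range(out_ch):            # each output sees `fan_in` random inputs
            mask[o, torch.randperm(in_ch)[:fan_in]] = 1.0
        # Registered as a buffer: saved with the model, never touched by the
        # optimizer, so the random connectome stays fixed through train and test.
        self.register_buffer("mask", mask[:, :, None, None])

    def forward(self, x):
        # Masked weights: absent connections contribute nothing and, since the
        # mask multiplies the weight, they also receive zero gradient.
        return F.conv2d(x, self.weight * self.mask, self.bias,
                        padding=self.weight.shape[-1] // 2)
```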
A layer modelling local random connectomes in the cortex within deep networks capable of learning general non-parametric invariances from the data itself.
945
scitldr
We propose a fully-convolutional conditional generative model, the latent transformation neural network (LTNN), capable of view synthesis using a light-weight neural network suited for real-time applications. In contrast to existing conditional generative models which incorporate conditioning information via concatenation, we introduce a dedicated network component, the conditional transformation unit (CTU), designed to learn the latent space transformations corresponding to specified target views. In addition, a consistency loss term is defined to guide the network toward learning the desired latent space mappings, a task-divided decoder is constructed to refine the quality of generated views, and an adaptive discriminator is introduced to improve the adversarial training process. The generality of the proposed methodology is demonstrated on a collection of three diverse tasks: multi-view reconstruction on real hand depth images, view synthesis of real and synthetic faces, and the rotation of rigid objects. The proposed model is shown to exceed state-of-the-art results in each category while simultaneously achieving a reduction in the computational demand required for inference by 30% on average. Generative models have been shown to provide effective frameworks for representing complex, structured datasets and generating realistic samples from underlying data distributions BID8. This concept has also been extended to form conditional models capable of sampling from conditional distributions in order to allow certain properties of the generated data to be controlled or selected BID20. These generative models are designed to sample from broad classes of the data distribution, however, and are not suitable for inference tasks which require identity preservation of the input data. Models have also been proposed which incorporate encoding components to overcome this by learning to map input data to an associated latent space representation within a generative framework BID18. The resulting inference models allow for the defining structure/features of inputs to be preserved while specified target properties are adjusted through conditioning BID34. Conventional conditional models have largely relied on rather simple methods, such as concatenation, for implementing this conditioning process; however, BID21 have shown that utilizing the conditioning information in a less trivial, more methodical manner has the potential to significantly improve the performance of conditional generative models. In this work, we provide a general framework for effectively performing inference with conditional generative models by strategically controlling the interaction between conditioning information and latent representations within a generative inference model. In this framework, a conditional transformation unit (CTU), Φ, is introduced to provide a means for navigating the underlying manifold structure of the latent space. The CTU is realized in the form of a collection of convolutional layers which are designed to approximate the latent space operators defined by mapping encoded inputs to the encoded representations of specified targets (see FIG7). This is enforced by introducing a consistency loss term to guide the CTU mappings during training. In addition, a conditional discriminator unit (CDU), Ψ, also realized as a collection of convolutional layers, is included in the network's discriminator.
This CDU is designed to improve the network's ability to identify and eliminate transformation-specific artifacts in the network's predictions. The network has also been equipped with RGB balance parameters consisting of three values {θ_R, θ_G, θ_B} designed to give the network the ability to quickly adjust the global color balance of the images it produces to better align with that of the true data distribution. In this way, the network is easily able to remove unnatural hues and focus on estimating local pixel values by adjusting the three RGB parameters rather than correcting each pixel individually.

FIG7: The conditional transformation unit Φ constructs a collection of mappings {Φ_k} in the latent space which produce high-level attribute changes to the decoded outputs. Conditioning information is used to select the appropriate convolutional weights ω_k for the specified transformation; the encoding l_x of the original input image x is transformed to l̂_{y_k} = Φ_k(l_x) = conv(l_x, ω_k) and provides an approximation to the encoding l_{y_k} of the attribute-modified target image y_k.

In addition, we introduce a novel estimation strategy for efficiently learning shape and color properties simultaneously; a task-divided decoder is designed to produce a coarse pixel-value map along with a refinement map in order to split the network's overall task into distinct, dedicated network components. Our key contributions are summarized as follows:

1. We introduce the conditional transformation unit, with a family of modular filter weights, to learn high-level mappings within a low-dimensional latent space. In addition, we present a consistency loss term which is used to guide the transformations learned during training.
2. We propose a novel framework for color inference which separates the generative process into three distinct network components dedicated to learning i) coarse pixel value estimates, ii) pixel refinement scaling factors, and iii) the global RGB color balance of the dataset.
3. We introduce the conditional discriminator unit designed to improve adversarial training by identifying and eliminating transformation-specific artifacts present in generated images.

Each contribution proposed above has been shown to provide a significant improvement to the network's overall performance through a series of ablation studies. The resulting latent transformation neural network (LTNN) is placed through a series of comparative studies on a diverse range of experiments where it is seen to outperform existing state-of-the-art models for (i) simultaneous multi-view reconstruction of real hand depth images in real-time, (ii) view synthesis and attribute modification of real and synthetic faces, and (iii) the synthesis of rotated views of rigid objects. Moreover, the CTU conditioning framework allows for additional conditioning information, or target views, to be added to the training procedure ad infinitum without any increase to the network's inference speed. BID4 has proposed a supervised, conditional generative model trained to generate images of chairs, tables, and cars with specified attributes which are controlled by transformation and view parameters passed to the network. The range of objects which can be synthesized using the framework is strictly limited to the pre-defined models used for training; the network can generate different views of these models, but cannot generalize to unseen objects to perform inference tasks. Conditional generative models have been widely used for geometric prediction BID23 BID32.
These models are reliant on additional data, such as depth information or mesh models, to perform their target tasks, however, and cannot be trained using images alone. Other works have introduced a clamping strategy to enforce a specific organizational structure in the latent space BID27 BID17; these networks require extremely detailed labels for supervision, such as the graphics code parameters used to create each example, and are therefore very difficult to implement for more general tasks (e.g. training with real images). Zhou et al. have proposed the appearance flow network (AFN), designed specifically for the prediction of rotated viewpoints of objects from images. This framework also relies on geometric concepts unique to rotation and is not generalizable to other inference tasks. The conditional variational autoencoder (CVAE) incorporates conditioning information into the standard variational autoencoder (VAE) framework BID16 and is capable of synthesizing specified attribute changes in an identity preserving manner BID29 BID34. CVAE-GAN BID0 further adds adversarial training to the CVAE framework in order to improve the quality of generated predictions. Zhang et al. have introduced the conditional adversarial autoencoder (CAAE) designed to model age progression/regression in human faces. This is achieved by concatenating conditioning information (i.e. age) with the input's latent representation before proceeding to the decoding process. The framework also includes an adaptive discriminator with conditional information passed using a resize/concatenate procedure. BID13 have proposed Pix2Pix as a general-purpose image-to-image translation network capable of synthesizing views from a single image. The IterGAN model introduced by BID6 is also designed to synthesize novel views from a single image, with a specific emphasis on the synthesis of rotated views of objects in small, iterative steps. To the best of our knowledge, all existing conditional generative models designed for inference use fixed hidden layers and concatenate conditioning information directly with latent representations; in contrast to these existing methods, the proposed model incorporates conditioning information by defining dedicated, transformation-specific convolutional layers at the latent level. This conditioning framework allows the network to synthesize multiple transformed views from a single input, while retaining a fully-convolutional structure which avoids the dense connections used in existing inference-based conditional models. Most significantly, the proposed LTNN framework is shown to outperform state-of-the-art models in a diverse range of view synthesis tasks, while requiring substantially fewer FLOPs for inference than other conditional generative models (see Tables 1 & 2). In this section, we introduce the methods used to define the proposed LTNN model. We first give a brief overview of the LTNN network structure. We then detail how conditional transformation unit mappings are defined and trained to operate on the latent space, followed by a description of the conditional discriminator unit implementation and the network loss function used to guide the training process. Lastly, we describe the task-division framework used for the decoding process. The basic workflow of the proposed model is as follows:

1. Encode the input image x to a latent representation l_x = Encode(x).
2. Use conditioning information k to select conditional, convolutional filter weights ω_k.
3. Map the latent representation l_x to l̂_{y_k} = Φ_k(l_x) = conv(l_x, ω_k), an approximation of the encoded latent representation l_{y_k} of the specified target image y_k.
4. Decode l̂_{y_k} to obtain a coarse pixel value map and a refinement map.
5. Scale the channels of the pixel value map by the RGB balance parameters and take the Hadamard product with the refinement map to obtain the final prediction ŷ_k.
6. Pass real images y_k as well as generated images ŷ_k to the discriminator, and use the conditioning information to select the discriminator's conditional filter weights ω̄_k.
7. Compute loss and update weights using ADAM optimization and backpropagation.

A detailed overview of the proposed network structure is provided in Section A.1 of the appendix.

Algorithm 1 (training). Provide: labeled dataset (x, {y_k}_{k∈T}) with target transformations indexed by a fixed set T, encoder weights θ_E, decoder weights θ_D, RGB balance parameters {θ_R, θ_G, θ_B}, conditional transformation unit weights {ω_k}_{k∈T}, discriminator D with standard weights θ_D̄ and conditionally selected weights {ω̄_k}_{k∈T}, and loss function hyperparameters γ, ρ, λ, κ corresponding to the smoothness, reconstruction, adversarial, and consistency loss terms, respectively. The specific loss function components are defined in detail in Equations 1-5 in Section 3.2.

  procedure TRAIN:
    x, {y_k}_{k∈T} = get train batch        # Sample input and targets from training set
    l_x = Encode[x]
    for k in T do:
      l̂_{y_k} = Φ_k(l_x) = conv(l_x, ω_k)
      decode l̂_{y_k} into value and refinement maps and assemble the final network prediction ŷ_k for the target
    update the encoder, decoder, RGB, and CTU weights (θ_E, θ_D, {θ_R, θ_G, θ_B}, {ω_k}) using the loss in Equation 5
    update the discriminator and CDU weights (θ_D̄, {ω̄_k}) using the adversarial loss in Equation 1

Generative models have frequently been designed to explicitly disentangle the latent space in order to enable high-level attribute modification through linear, latent space interpolation. This linear latent structure is imposed by design decisions, however, and may not be the most natural way for a network to internalize features of the data distribution. Several approaches have been proposed which include nonlinear layers for processing conditioning information at the latent space level. In these conventional conditional generative frameworks, conditioning information is introduced by combining features extracted from the input with features extracted from the conditioning information (often using dense connection layers); these features are typically combined using standard vector concatenation, although some have opted to use channel concatenation (BID0). Six of these conventional conditional network designs are illustrated in FIG0 along with the proposed LTNN network design for incorporating conditioning information. Rather than directly concatenating conditioning information, we propose using a conditional transformation unit (CTU), consisting of a collection of distinct convolutional mappings in the network's latent space; conditioning information is then used to select which collection of weights, i.e. which CTU mapping, should be used in the convolutional layer to perform a specified transformation. For view point estimation, there is an independent CTU per viewpoint. Each CTU mapping maintains its own collection of convolutional filter weights and uses Swish activations BID26. The filter weights and Swish parameters of each CTU mapping are selectively updated by controlling the gradient flow based on the conditioning information provided.
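A minimal PyTorch sketch of the CTU as described here: one set of convolutional filter weights (and Swish parameters) per transformation, with the conditioning index selecting which set is used, so only the selected mapping receives gradients. The class and variable names are illustrative, and the kernel size and per-mapping Swish slope are assumptions.

```python
import torch
import torch.nn as nn

class ConditionalTransformationUnit(nn.Module):
    """Latent-space mappings {Phi_k}: conditioning selects the conv weights."""
    def __init__(self, channels, num_transforms):
        super().__init__()
        self.maps = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            for _ in range(num_transforms))
        self.beta = nn.Parameter(torch.ones(num_transforms))  # Swish slopes

    def forward(self, latent, k):
        # Only self.maps[k] and self.beta[k] enter the computation graph, so
        # the other mappings' parameters see no gradient for this sample.
        h = self.maps[k](latent)
        return h * torch.sigmoid(self.beta[k] * h)             # Swish activation
```

In training, the consistency term described below can then be computed directly on this output, e.g. `(ctu(encode(x), k) - encode(y_k)).abs().mean()`, which is minimized exactly when Φ_k maps the input encoding onto the target encoding (the choice of the L1 norm here is an assumption).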
The CTU mappings are trained to transform the encoded, latent space representation of the network's input in a manner which produces high-level view or attribute changes upon decoding. This is accomplished by introducing a consistency term into the loss function which is minimized precisely when the CTU mappings behave as depicted in FIG7. In this way, different angles of view, light directions, and deformations, for example, can be synthesized from a single input image. The discriminator used in the adversarial training process is also passed conditioning information which specifies the transformation which the model has attempted to make. The conditional discriminator unit (CDU), consisting of convolutional layers with modular weights similar to the CTU, is trained to specifically identify unrealistic artifacts which are being produced by the corresponding conditional transformation unit mappings. For view point estimation, there is an independent CDU per viewpoint. The incorporation of this context-aware discriminator structure has significantly boosted the performance of the network (see Table 4 in the appendix). The proposed model uses the adversarial loss as the primary loss component. The discriminator, D, is trained using the adversarial loss term L^D_adv defined below in Equation 1. Additional loss terms corresponding to structural reconstruction, smoothness BID14, and a notion of consistency, are also used for training the encoder/decoder: DISPLAYFORM0 DISPLAYFORM1 where y_k is the modified target image corresponding to an input x, ω̄_k are the weights of the CDU mapping corresponding to the k-th transformation, Φ_k is the CTU mapping for the k-th transformation, ŷ_k = Decode[Φ_k(Encode[x])] is the network prediction, and τ_{i,j} is the two-dimensional, discrete shift operator. The final loss function for the encoder and decoder components is given by the weighted sum $\mathcal{L} = \rho \cdot \mathcal{L}_{rec} + \gamma \cdot \mathcal{L}_{smooth} + \lambda \cdot \mathcal{L}_{adv} + \kappa \cdot \mathcal{L}_{consist}$, with hyperparameters typically selected so that λ, ρ ≫ γ, κ. The consistency loss is designed to guide the CTU mappings toward approximations of the latent space mappings which connect the latent representations of input images and target images, as depicted in FIG7. In particular, the consistency term enforces the condition that the transformed encoding, l̂_{y_k} = Φ_k(Encode[x]), approximates the encoding of the k-th target image, l_{y_k} = Encode[y_k], during the training process. The decoding process has been divided into three tasks: estimating the refinement map, the pixel-values, and the RGB color balance of the dataset. We have found this decoupled framework for estimation helps the network converge to better minima and produce sharp, realistic outputs. The decoding process begins with a series of convolutional layers followed by bilinear interpolation to upsample the low resolution latent information. The last component of the decoder's upsampling process consists of two distinct transpose convolutional layers used for task separation; one layer is allocated for predicting the refinement map while the other is trained to predict pixel-values. The refinement map layer incorporates a sigmoidal activation function which outputs scaling factors intended to refine the coarse pixel value estimations. RGB balance parameters, consisting of three trainable variables, are used as weights for balancing the color channels of the pixel value map.
The Hadamard product of the refinement map and the RGB-rescaled value map serves as the network's final output: $\hat{y}_k = M_k \odot (\Theta \odot V_k)$, where $V_k$ is the coarse pixel-value map, $M_k$ is the refinement map, and $\Theta$ broadcasts the RGB balance parameters $\{\theta_R, \theta_G, \theta_B\}$ across the color channels. In this way, the network has the capacity to mask values which lie outside of the target object (i.e. by setting refinement map values to zero), which allows the value map to focus on the object itself during the training process. Experimental results show that the refinement maps learn to produce masks which closely resemble the target objects' shapes and have sharp drop-offs along the boundaries.

FIG1: Multi-view predictions on the NYU hand dataset BID33. The input depth-map hand pose image is shown to the far left, followed by the network predictions for 9 synthesized viewpoints. The views synthesized using LTNN are seen to be sharper and also yield higher accuracy for pose estimation (see FIG4).

To show the generality of our method, we have conducted a series of diverse experiments: (i) hand pose estimation using a synthetic training set and real NYU hand depth image data BID33 for testing, (ii) synthesis of rotated views of rigid objects using the real ALOI dataset BID7 and the synthetic 3D chair dataset BID1, (iii) synthesis of rotated views using a real face dataset BID5, and (iv) the modification of a diverse range of attributes on a synthetic face dataset. For each experiment, we have trained the models using 80% of the datasets. Since ground truth target depth images were not available for the real hand dataset, an indirect metric has been used to quantitatively evaluate the model as described in Section 4.1. Ground truth data was available for all other experiments, and models were evaluated directly using the L1 mean pixel-wise error and the structural similarity index measure (SSIM) BID23 BID19 (the masked pixel-wise error L1^M (BID6) was used in place of the L1 error for the ALOI experiment). More details regarding the precise training configurations and the creation of the synthetic datasets can be found in the appendix. To evaluate the proposed framework against existing works, two comparison groups have been formed: conditional inference models (CVAE-GAN, CVAE, and CAAE) with comparable encoder/decoder structures for comparison on experiments with non-rigid objects, and view synthesis models (MV3D BID32, IterGAN, Pix2Pix, AFN, and TVSN BID23) for comparison on experiments with rigid objects. Additional experiments have been performed to compare the proposed CTU conditioning method with other conventional concatenation methods (see FIG0); results are shown in FIG3. Qualitative and quantitative comparisons for each experiment are provided in the appendix. Hand pose experiment: Since ground truth predictions for the real NYU hand dataset were not available, the LTNN model has been trained using a synthetic dataset generated using 3D mesh hand models. The NYU dataset does, however, provide ground truth coordinates for the input hand pose; using these we were able to indirectly evaluate the performance of the model by assessing the accuracy of a hand pose estimation method using the network's multi-view predictions as input. More specifically, the LTNN model was trained to generate 9 different views which were then fed into the pose estimation network from BID3 (also trained using the synthetic dataset). A comparison of the quantitative hand pose estimation results is provided in FIG4, where the proposed LTNN framework is seen to provide a substantial improvement over existing methods; qualitative results are also available in FIG1.
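The task-divided output described above reduces to a few lines; the sketch below shows only the final stage (value map, sigmoid refinement map, and the three RGB balance weights combined via the Hadamard product). The layer shapes, strides, and names are assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class TaskDividedHead(nn.Module):
    """Final decoder stage: y_hat = refinement * (RGB-rescaled value map)."""
    def __init__(self, in_ch):
        super().__init__()
        self.value = nn.ConvTranspose2d(in_ch, 3, kernel_size=5, stride=2,
                                        padding=2, output_padding=1)
        self.refine = nn.ConvTranspose2d(in_ch, 3, kernel_size=5, stride=2,
                                         padding=2, output_padding=1)
        self.rgb = nn.Parameter(torch.ones(3))        # theta_R, theta_G, theta_B

    def forward(self, features):
        value = self.rgb.view(1, 3, 1, 1) * self.value(features)  # color balance
        mask = torch.sigmoid(self.refine(features))   # refinement / masking map
        return mask * value                           # Hadamard product
```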
With regard to real-time applications, the proposed model runs at 114 fps without batching and at 1975 fps when applied to a mini-batch of size 128 (using a single TITAN Xp GPU and an Intel i7-6850K CPU). Real face experiment: The stereo face database BID5, consisting of images of 100 individuals from 10 different viewpoints, was used for experiments with real faces; these faces were segmented using the method of BID22 and then cropped and centered to form the final dataset. The LTNN model was trained to synthesize images of input faces corresponding to three consecutive horizontal rotations. As shown in FIG4, our method significantly outperforms the CVAE-GAN, CAAE, and IterGAN models in both the L1 and SSIM metrics; qualitative results are also available in FIG5 and Section A.6 of the appendix. Real object experiment: The ALOI dataset BID7, consisting of images of 1000 real objects viewed from 72 rotated angles (covering one full 360° rotation), has been used for experiments on real objects. As shown in Table 1 and in Figure 8, our method outperforms other state-of-the-art methods with respect to the L1 metric and achieves comparable SSIM scores. Of note is the fact that the LTNN framework is capable of effectively performing the specified rigid-object transformations using only a single image as input, whereas most state-of-the-art view synthesis methods require additional information which is not practical to obtain for real datasets. For example, MV3D requires depth information and TVSN requires 3D models to render visibility maps for training, neither of which is available in the ALOI dataset. We have tested our model's ability to perform 360° view estimation on the chairs and compared the results with the other state-of-the-art methods. The proposed model outperforms existing models specifically designed for the task of multi-view prediction and requires the least FLOPs for inference compared with all other methods (see Table 2).

Table 2: Results for 3D chair 360° view synthesis. The proposed method uses significantly fewer parameters during inference, requires the least FLOPs, and yields the fastest inference times. FLOP calculations correspond to inference for a single image with resolution 256×256×3.

To evaluate the proposed framework's performance on a more diverse range of attribute modification tasks, a synthetic face dataset and five conditional generative models with comparable encoder/decoder structures to the LTNN model have been selected for comparison. These models have been trained to synthesize discrete changes in elevation, azimuth, light direction, and age from a single grayscale image; results are shown in Table 3 and ablation results are available in Table 4. Near-continuous attribute modification is also possible within the proposed framework, and distinct CTU mappings can be composed with one another to synthesize multiple modifications simultaneously; more details and related figures are provided in Sections A.7.4 and A.7.5 of the appendix.

Table 3: Results for simultaneous colorization and attribute modification on the synthetic face dataset.

In this work, we have introduced an effective, general framework for incorporating conditioning information into inference-based generative models.
We have proposed a modular approach to incorporating conditioning information using CTUs and a consistency loss term, defined an efficient task-divided decoder setup for deconstructing the data generation process into manageable subtasks, and shown that a context-aware discriminator can be used to improve the performance of the adversarial training process. The performance of this framework has been assessed on a diverse range of tasks and shown to outperform state-of-the-art methods. At the bottleneck between the encoder and decoder, a conditional transformation unit (CTU) is applied to map the 2×2 latent features directly to the transformed 2×2 latent features on the right. This CTU is implemented as a convolutional layer with filter weights selected based on the conditioning information provided to the network. The noise vector z ∈ R⁴, drawn from a normal distribution N, is concatenated to the transformed 2×2 features and passed to the decoder for the face attributes task only. The 32×32 features near the end of the decoder component are processed by two independent convolution transpose layers: one corresponding to the value estimation map and the other corresponding to the refinement map. The channels of the value estimation map are rescaled by the RGB balance parameters, and the Hadamard product is taken with the refinement map to produce the final network output. For the ALOI data experiment, we have followed the same encoder and decoder structure, and for the stereo face dataset BID5 experiment, we have added an additional Block v1 layer in the encoder and decoder to utilize the full 128×128×3 resolution images. The encoder incorporates two main block layers, as defined in Figure A.2, which are designed to provide efficient feature extraction; these blocks follow a design similar to prior block architectures, but include dense connections between blocks, as introduced by BID10. We normalize the output of each network layer using the batch normalization method as described in BID12. For the decoder, we have opted for a minimalist design, inspired by the work of BID24. Standard convolutional layers with 3×3 filters and same padding are used through the penultimate decoding layer, and transpose convolutional layers with 5×5 filters and same padding are used to produce the value-estimation and refinement maps. All parameters have been initialized using the variance scaling initialization method described in BID9. Our method has been implemented and developed using the TensorFlow framework. The models have been trained using stochastic gradient descent (SGD) and the ADAM optimizer BID15 with initial parameters: learning rate = 0.005, β₁ = 0.9, and β₂ = 0.999 (as defined in the TensorFlow API r1.6 documentation for tf.train.AdamOptimizer), along with loss function hyperparameters: λ = 0.8, ρ = 0.2, γ = 0.0002, and κ = 0.00005 (as introduced in Equation 5). The discriminator is updated once every two encoder/decoder updates, and one-sided label smoothing BID28 has been used to improve stability of the discriminator training procedure. All datasets have also been normalized to the interval [0, 1] for training. Once the total number of output channels, N_out, is specified, the remaining N_out − N_in output channels are allocated to the non-identity filters (where N_in denotes the number of input channels). For the Block v1 layer at the start of the proposed LTNN model, for example, the input is a single grayscale image with N_in = 1 channel and the specified number of output channels is N_out = 32.
One of the 32 channels is accounted for by the identity component, and the remaining 31 channels are allocated to the three non-identity filters. When the remaining channel count is not divisible by 3, we allocate the remainder of the output channels to the single 3×3 convolutional layer. Swish activation functions are used for each filter; however, the filters with multiple convolutional layers (i.e. the right two filters in the Block v1 diagram) do not use activation functions for the intermediate 3×3 convolutional layers (i.e. those after the 1×1 layers and before the final 3×3 layers).

A.2.1 DATASET
A kinematic hand model with 33 degrees of freedom has been used to generate 200,000 distinct hand poses with nine depth images from different viewpoints for each pose. We sampled hand poses uniformly from each of the 18 joint angle parameters, covering a full range of hand articulations. The nine viewpoints are centered around a designated input view and correspond to 30° changes in the spherical coordinates of the viewer (i.e. 30° up, 30° right, 30° up and 30° right, etc.). Testing was performed on the MSRA and NYU BID33 hand datasets. We follow the training procedure proposed by BID3 for the hand pose estimation network; after training, the LTNN multi-view predictions are fed into the network and the accuracy of the predicted angles is used to assess how well these network predictions approximate the unknown, true views. As seen in Figure A.3, the optimal results are obtained when all 9 synthesized viewpoints are fed into the pose estimation network.

For the ALOI experiment, we have used images of resolution 256×256×3 from the ALOI database for training and testing. While the LTNN model is capable of making simultaneous predictions of multiple viewpoints, as illustrated in the hand and chair experiments, the Pix2Pix BID13 and IterGAN BID6 networks are designed to produce a single synthesized view. To make fair comparisons between these existing networks and the proposed LTNN model, each model has been trained only to produce a single 30° rotated view of the ALOI objects. In particular, only two CTU mappings were trained: one corresponding to the identity, and one corresponding to the rotated view.

Figure A.8: Experiment on unseen objects from the ALOI real object dataset. The first row of results is ground truth, the second row is ours, the third row is ours without task-division, the fourth row is IterGAN, and the bottom row is Pix2Pix. As shown in the figure, our methods are sharper and more realistic than the other methods in the majority of generated views.

Chairs from the ShapeNet BID2 3D model repository have been rotated horizontally by 20° 17 times and vertically by 10° 3 times; 6742 chairs have been selected following the data creation methodology from BID23.

A.6 REAL FACE EXPERIMENT
A.6.1 DATASET
The original dataset has 100 identities with 5 different views from 2 distinct cameras, yielding 1000 face images in total. Since the dataset is not large, we reduced noise by performing face segmentation with the method of BID22, manually filtering badly segmented faces from the results, and creating 300×300×3 face images. For training, we resize the original images to 128×128×3 resolution. Each face has been rendered at four distinct age ranges, and four different lighting directions have been used. The orientation of faces is allowed to vary in elevation from −20° to 29° by increments of 7° and in azimuth from 10° to 150° by increments of 20°.
To demonstrate the model's colorization capabilities, the input images have been converted to gray-scale using the luminosity method.

Table 4: Ablation/comparison results using identical encoder, decoder, and training procedure.

Task-division: An overview of the task-division decoding procedure applied to the synthetic face dataset is provided in Figure A.13. As noted in Section 3.3, the refinement maps tend to learn to produce masks which closely resemble the target objects' shapes and have sharp drop-offs along the objects' boundaries. In addition to masking extraneous pixels, these refinement maps have been shown to apply local color balancing by, for example, filtering out the green and blue channels near lips when applied to human faces (i.e. the refinement maps for the green and blue channels show darker regions near the lips, thus allowing the red channel to be expressed more distinctly). The use of a task-divided decoder can also be seen to remove artifacts in the generated images in Figure A.13; e.g. removal of the blurred eyebrow (light), elimination of excess hair near the side of the ear (azimuth), and reduction of the reddish vertical stripe on the forehead (age). As noted in Section 4.3, near-continuous attribute modification can be performed by piecewise-linear interpolation in the latent space. For example, we can train 9 CTU mappings $\{\Phi_k\}_{k=0}^{8}$ corresponding to discrete, incremental 7° changes in elevation {θ_k}. In this setting, the network predictions for an elevation change of θ_0 = 0° and θ_1 = 7° are given by Decode[Φ_0(l_x)] and Decode[Φ_1(l_x)], respectively (where l_x denotes the encoding of the input image). To predict an elevation change of 3.5°, we can perform linear interpolation in the latent space between the representations Φ_0(l_x) and Φ_1(l_x); that is, we may take our network prediction for the intermediate change of 3.5° to be: ŷ = Decode[l_y] where l_y = 0.5 · Φ_0(l_x) + 0.5 · Φ_1(l_x). Likewise, to approximate a change of 10.5° in elevation we may take Decode[l_y], where l_y = 0.5 · Φ_1(l_x) + 0.5 · Φ_2(l_x), as the network prediction. More generally, we can interpolate between the latent CTU map representations to predict a change θ via: $l_y = \lambda \cdot \Phi_k(l_x) + (1 - \lambda) \cdot \Phi_{k+1}(l_x)$, with k ∈ {0, ..., 7} and λ ∈ [0, 1] chosen so that θ = λ · θ_k + (1 − λ) · θ_{k+1}. Accordingly, the proposed framework naturally allows for continuous attribute changes to be approximated by using this piecewise-linear latent space interpolation procedure.

Figure A.19: Near-continuous attribute modification is attainable using piecewise-linear interpolation in the latent space. Provided a gray-scale image (corresponding to the faces on the far left), modified images corresponding to changes in light direction (first), age (second), azimuth (third), and elevation (fourth) are produced with 17 degrees of variation. These attribute-modified images have been produced using 9 CTU mappings, corresponding to varying degrees of modification, and linearly interpolating between the discrete transformation encodings in the latent space. Additional qualitative results for near-continuous attribute modification can be found in Section A.7.6.
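The interpolation rule above is a one-liner given the CTU sketch from earlier; the helper below is an illustrative wrapper (the `ctu` argument is assumed to expose the per-mapping forward used in that sketch).

```python
def interpolate_transform(ctu, l_x, theta, thetas):
    """Blend the two nearest discrete CTU mappings:
    l_y = lam * Phi_k(l_x) + (1 - lam) * Phi_{k+1}(l_x),
    with lam chosen so that theta = lam * thetas[k] + (1 - lam) * thetas[k+1]."""
    # Find k with thetas[k] <= theta <= thetas[k+1] (thetas assumed sorted).
    k = max(i for i in range(len(thetas) - 1) if thetas[i] <= theta)
    lam = (thetas[k + 1] - theta) / (thetas[k + 1] - thetas[k])
    return lam * ctu(l_x, k) + (1.0 - lam) * ctu(l_x, k + 1)

# e.g. a 3.5-degree elevation change with 7-degree increments:
# l_y = interpolate_transform(ctu, l_x, 3.5, [7.0 * k for k in range(9)])
```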
We introduce an effective, general framework for incorporating conditioning information into inference-based generative models.
946
scitldr
We describe three approaches to enabling an extremely computationally limited embedded scheduler to consider a small number of alternative activities based on resource availability. We consider the case where the scheduler is so computationally limited that it cannot perform backtracking search. The first two approaches precompile resource checks (called guards) that only enable selection of a preferred alternative activity if sufficient resources are estimated to be available to schedule the remaining activities. The final approach mimics backtracking by invoking the scheduler multiple times with the alternative activities. We present an evaluation of these techniques on mission scenarios (called sol types) from NASA's next planetary rover, where they are being evaluated for inclusion in the onboard scheduler.

Embedded schedulers must often operate with very limited computational resources. Due to such limitations, it is not always feasible to develop a scheduler with a backtracking search algorithm. This makes it challenging to perform even simple schedule optimization when doing so may use resources needed for yet unscheduled activities. In this paper, we present three algorithms to enable such a scheduler to consider a very limited type of preferred activity while still scheduling all required (hereafter called mandatory) activities. Preferred activities are grouped into switch groups: sets of activities, where each activity in the set is called a switch case and exactly one of the activities in the set must be scheduled. The switch cases differ only in how much time, energy, and data volume they consume, and the goal is for the scheduler to schedule the most desirable activity (coincidentally the most resource consuming one) without sacrificing any other mandatory activity. The target scheduler is a non-backtracking scheduler to be onboard the NASA Mars 2020 planetary rover BID9 that schedules in priority-first order and never removes or moves an activity after it is placed during a single run of the scheduler. Because the scheduler does not backtrack, it is challenging to ensure that scheduling a consumptive switch case will not use too many resources and therefore prevent a later (in terms of scheduling order, not necessarily time order) mandatory activity from being scheduled. The onboard scheduler is designed to make the rover more robust to run-time variations by rescheduling multiple times during execution BID4. If an activity ends earlier or later than expected, then rescheduling allows the scheduler to consider changes in resource consumption and reschedule accordingly. Our algorithms to schedule switch groups must also be robust to varying execution durations and rescheduling. We have developed several approaches to handle scheduling switch groups. The first two, called guards, involve reserving enough of the sensitive resources (time, energy, data volume) to ensure all later required activities can be scheduled. The third approach emulates backtracking under certain conditions by reinvoking the scheduler multiple times. These three techniques are currently being considered for implementation in the Mars 2020 onboard scheduler.

For the scheduling problem we adopt the definitions in (Rabideau and Benowitz 2017). The scheduler is given a list of activities

A_1⟨p_1, d_1, R_1, e_1, dv_1, Γ_1, T_1, D_1⟩ ... A_n⟨p_n, d_n, R_n, e_n, dv_n, Γ_n, T_n, D_n⟩

where p_i is the scheduling priority of activity A_i; d_i is the nominal, or predicted, duration of activity A_i; R_i is the set of unit resources R_i1 ... R_im that activity A_i will use; e_i and dv_i are the rates at which the consumable resources energy and data volume, respectively, are consumed by activity A_i; Γ_i1 ... Γ_ir are non-depletable resources used by activity A_i, such as sequence engines available or peak power; T_i is a set of start time windows [T_ij_start, T_ij_preferred, T_ij_end] for activity A_i; and D_i is a set of activity dependency constraints for activity A_i, where A_p → A_q means A_q must execute successfully before A_p starts.

The goal of the scheduler is to schedule all mandatory activities and the best switch cases possible while respecting individual and plan-wide constraints. Each activity is assigned a scheduling priority. This priority determines the order in which the activity will be considered for addition to the schedule. The scheduler attempts to schedule the activities in priority order; therefore, higher priority activities can block lower priority activities from being scheduled, and higher priority activities are more likely to appear in the schedule. Mandatory activities are activities, m_1 ... m_j ⊆ A, that must be scheduled. The presumption is that the problem as specified is valid, that is to say that a schedule exists that includes all of the mandatory activities, respects all of the provided constraints, and does not exceed available resources. In addition, activities can be grouped into switch groups. The activities within a switch group are called switch cases and vary in how many resources (time, energy, and data volume) they consume. It is mandatory to schedule exactly one switch case, and preferable to schedule a more resource intensive one, but not at the expense of another mandatory activity. For example, one of the Mars 2020 instruments takes images to fill mosaics which can vary in size; for instance we might consider 1x4, 2x4, or 4x4 mosaics. Taking a larger mosaic might be preferable, but it takes more time, takes more energy, and produces more data volume. These alternatives would be modeled by a switch group such as

SwitchGroup = {Mosaic_1x4, Mosaic_2x4, Mosaic_4x4}, with d_Mosaic_1x4 < d_Mosaic_2x4 < d_Mosaic_4x4

The desire is for the scheduler to schedule the activity Mosaic_4x4; if it does not fit, then try scheduling Mosaic_2x4, and eventually try Mosaic_1x4 if the other two fail to schedule. It is not worth scheduling a more consumptive switch case if doing so will prevent a future, lower priority mandatory activity from being scheduled due to lack of resources. Because our computationally limited scheduler cannot search or backtrack, it is a challenge to predict if a higher level switch case will be able to fit in the schedule without consuming resources that will cause another lower priority mandatory activity to be forced out of the schedule. Consider the following example in FIG1, where the switch group consists of activities B1, B2, and B3 with d_B1 < d_B2 < d_B3. Each activity in this example also has one start time window from T_i_start to T_i_end. B3 is the most resource intensive and has the highest priority, so the scheduler will first try scheduling B3. As shown in FIG1, scheduling B3 will prevent the scheduler from placing activity C at a time satisfying its execution constraints. So, B3 should not be scheduled.
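The activity tuple and switch group structure above might be represented as follows (a minimal sketch; the field names mirror the notation in the text, while the classes themselves are our own assumption, not the flight software's data model):

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Activity:
    name: str
    priority: float                  # p_i: scheduling priority
    duration: float                  # d_i: nominal (predicted) duration
    unit_resources: List[str]        # R_i: unit resources used
    energy_rate: float               # e_i: energy consumption rate
    data_rate: float                 # dv_i: data volume rate
    windows: List[Tuple[float, float, float]]  # T_i: (start, preferred, end)
    dependencies: List[str] = field(default_factory=list)  # D_i

@dataclass
class SwitchGroup:
    # Switch cases ordered from least to most resource consuming;
    # exactly one of them must be scheduled.
    cases: List[Activity]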
The question might arise as to why switch groups cannot simply be scheduled last in terms of scheduling order. This is difficult for several reasons: 1) we would like to avoid gaps in the schedule, which is most effectively done by scheduling primarily left to right temporally, and 2) if another activity is dependent on an activity in a switch group, then scheduling the switch group last would introduce complications to ensure that the dependencies are satisfied. The remainder of the paper is organized as follows. First, we describe several plan-wide energy constraints that must be satisfied. Then, we discuss two guard approaches to scheduling preferred activities, which place conditions on the scheduler that restrict the placement of switch cases under certain conditions. We then discuss several versions of an approach which emulates backtracking by reinvoking the scheduler multiple times with the switch cases. Finally, we present empirical results to evaluate and compare these approaches. There are several energy constraints which must be satisfied throughout scheduling and execution. The scheduling process for each sol, or Mars day, begins with the assumption that the rover is asleep for the entire time spanning the sol. Each time the scheduler places an activity, the rover must be awake, so the energy level declines. When the rover is asleep, the energy level increases. Two crucial energy values which must be taken into account are the Minimum State of Charge (SOC) and the Minimum Handover State of Charge. The state of charge, or energy value, cannot dip below the Minimum SOC at any point. If scheduling an activity would cause the energy value to dip below the Minimum SOC, then that activity will not be scheduled. In addition, the state of charge cannot be below the Minimum Handover SOC at the Handover Time, in effect when the next schedule starts (e.g., the handover SOC of the previous plan is the expected beginning SOC for the subsequent schedule). In order to preserve battery life, the scheduler must also consider the Maximum State of Charge constraint. Exceeding the Maximum SOC hurts long-term battery performance, and the rover will perform shunting. To prevent it from exceeding this value, the rover may be kept awake. First we will discuss two guard methods to schedule switch cases: the Fixed Point guard and the Sol Wide guard. Both of these methods attempt to schedule switch cases by reserving enough time and energy to schedule the remaining mandatory activities. For switch groups, this means that resources will be reserved for the least resource consuming activity, since it is mandatory to schedule exactly one activity in the switch group. The method through which both of these guard approaches reserve enough time to schedule future mandatory activities is the same. They differ in how they ensure there is enough energy. While the Fixed Point guard reserves enough energy at a single fixed time point (the time at which the least resource consuming switch case is scheduled to end in the nominal schedule), the Sol Wide guard attempts to reserve sufficient energy by keeping track of the energy balance over the entire plan, or sol. In this discussion, we do not attempt to reserve data volume while computing the guards, as it is not expected to be as constraining a resource as time or energy. We aim to take data volume into account as we continue work on this topic. Both the time and energy guards are calculated offline, before execution occurs, using a nominal schedule.
Then, while rescheduling during execution, the constraints given by the guards are applied to ensure that scheduling a higher level switch case will not prevent a future mandatory activity from being scheduled. If activities have ended sufficiently early and freed up resources, then it may be possible to reschedule with a more consumptive switch case. First, we discuss how the Fixed Point and Sol Wide guards ensure enough time will be reserved to schedule remaining mandatory activities while attempting to schedule a more resource consuming switch case. If a preferred time, T_ij_preferred, is specified for an activity, the scheduler will try to place the activity closest to its preferred time while obeying all other constraints. Otherwise, the scheduler will try to place the activity as early as possible. Each switch group in the set of activities used to create a nominal schedule includes only the nominal, or least resource consuming, switch case, and all activities take their predicted durations. First, we generate a nominal schedule and find the time at which the nominal switch case is scheduled to complete, as shown in FIG2. We then manipulate the execution time constraints of the more resource intensive switch cases, B2 and B3 in FIG2, so that they are constrained to complete by T_Nominal, as shown in Equation 2:

T_Bij_end ≤ T_Nominal − d_Bi    (2)

Thus, a more (time) resource consuming switch case will not use up time from any remaining lower priority mandatory activities. If an activity has more than one start time window, then we only alter the one which contains T_Nominal and remove the others. If a prior activity ends earlier than expected during execution and frees up some time, then it may be possible to schedule a more consumptive switch case while obeying the time guard given by the altered execution time constraints. Since we found that the above method was quite conservative and heavily constrained the placement of a more resource consuming switch case, we attempted a preferred time method to loosen the time guard. In this approach, we set the preferred time of the nominal switch case to its latest start time before generating the nominal schedule. Then, while the nominal schedule is being generated, the scheduler will try to place the nominal switch case as late as possible, since the scheduler tries to place an activity as close to its preferred time as possible. As a result, T_Nominal will likely be later than it would be if the preferred time were not set in this way. As per Equation 2, the latest start times, T_Bij_end, of the more resource consuming switch cases may then be later than they would be using the previous method, where the preferred time was not altered, thus allowing for wider start time windows for higher level switch cases. This method has some risks. If the nominal switch case is placed as late as possible, it could use up time from another mandatory activity with a tight execution window that it would not otherwise have used up if it were placed earlier.
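A minimal sketch of this time-guard computation, under the assumption (suggested by the text) that each window's end field is the latest allowed start time, and using the hypothetical Activity/SwitchGroup structures sketched earlier:

def apply_time_guard(switch_group, t_nominal):
    # Constrain every non-nominal switch case to complete by t_nominal.
    # switch_group.cases are ordered least to most consumptive.
    for case in switch_group.cases[1:]:
        # Keep only the window containing t_nominal; drop the others.
        kept = [w for w in case.windows if w[0] <= t_nominal <= w[2]]
        case.windows = [(start, preferred, min(end, t_nominal - case.duration))
                        for (start, preferred, end) in kept]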
Fixed Point Minimum State of Charge Guard: The Fixed Point method attempts to ensure that scheduling a more resource consuming switch case will not cause the energy to violate the Minimum SOC while scheduling any future mandatory activities, by reserving sufficient energy at a single, fixed point in time, T_Nominal, as shown in FIG5. The guard value for the Minimum SOC is the state of charge value at T_Nominal while constructing the nominal schedule. When attempting to schedule a more resource intensive switch case, a constraint is placed on the scheduler so that the energy cannot fall below the Minimum SOC guard value at time T_Nominal. If an activity ends early (and uses fewer resources than expected) during execution, it may be possible to satisfy this guard while scheduling a more consumptive switch case. Then, while attempting to place a more consumptive switch case, a constraint is placed on the scheduler so that the extra energy required by the switch case does not exceed the Energy Leftover from the nominal schedule, as in FIG6. For example, if we have a switch group consisting of three activities, B1, B2, and B3 with d_B3 > d_B2 > d_B1, and each switch case consumes e Watts of power, we must ensure that the following inequality holds at the time the scheduler is attempting to schedule a higher level switch case B_i:

e · (d_Bi − d_B1) ≤ Energy_Leftover    (3)

There may be more than one switch group in the schedule. Each time a higher level switch case is scheduled, the Energy Leftover value is decreased by the extra energy required to schedule it. When the scheduler tries to place a switch case in another switch group, it will check against the updated Energy Leftover. Sol Wide Handover State of Charge Guard: The Sol Wide handover SOC guard only schedules a more resource consumptive switch case if doing so will not cause the energy to dip below the Handover SOC at handover time. First, we use the nominal schedule to calculate how much energy is needed to schedule the remaining mandatory activities. Having a Maximum SOC constraint while calculating this value may produce an inaccurate result, since any energy that would exceed the Maximum SOC would not be taken into account. So, in order to have an accurate prediction of the energy balance as activities are being scheduled, this value is calculated assuming there is no Maximum SOC constraint. (The Maximum SOC constraint is only removed while computing the guard offline, to gain a clear understanding of the energy balance; during execution it is enforced.) As shown in FIG8, the energy needed to schedule the remaining mandatory activities is the difference between the energy level just after the nominal switch case has been scheduled, call this E1, and after all activities have been scheduled, call this energy level E2:

Energy_Needed = E1 − E2    (4)

Then, a constraint is placed on the scheduler so that the energy value after a higher level switch case is scheduled must be at least:

Minimum Handover SOC + Energy_Needed    (5)

By placing this energy constraint, we hope to prevent the energy level from falling under the Minimum Handover SOC by the time all activities have been scheduled. Sol Wide Minimum State of Charge Guard: While we ensure that the energy will not violate the Minimum Handover SOC by keeping track of the energy balance, it is possible that scheduling a longer switch case will cause the energy to fall below the Minimum SOC. To limit the chance of this happening, we run a Monte Carlo simulation of execution offline while computing the sol wide energy guard. We use this Monte Carlo to determine if a mandatory activity was not scheduled due to a longer switch case being scheduled earlier. If this occurs in any of the Monte Carlo runs of execution, then we increase the guard constraint in Equation 5.
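The two energy-guard checks described above can be summarized in a minimal sketch (the helper names and the schedule interface, e.g. energy_at, are our own assumptions, not the onboard scheduler's API):

def fixed_point_guard_ok(schedule, t_nominal, min_soc_guard):
    # Energy at the fixed time point must stay above the guard value (FIG5).
    return schedule.energy_at(t_nominal) >= min_soc_guard

def sol_wide_guard_ok(energy_after_switch, handover_soc, energy_needed):
    # Energy after placing the higher level switch case must leave enough
    # to schedule all remaining mandatory activities (Eq. 5).
    return energy_after_switch >= handover_soc + energy_needed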
We first find the times at which each mandatory activity was scheduled to finish in the nominal schedule. Then, we run a Monte Carlo of execution with the input plan containing the guard and all switch cases. Each Monte Carlo run differs in how long each activity takes to execute compared to its original predicted duration in the schedule. If a mandatory activity was not executed in any of the Monte Carlo runs, and a more resource consuming switch case was executed before the time at which that mandatory activity was scheduled to complete in the nominal schedule, then we increase the Sol Wide energy guard value in Equation 5 by a fixed amount. We aim to compose a better heuristic for increasing the guard value as we continue work on this subject. The Multiple Scheduler Invocation (MSI) approach emulates backtracking by reinvoking the scheduler multiple times with the switch cases. MSI does not require any precomputation offline before execution, as the guards do, and instead reinvokes the scheduler multiple times during execution. During execution, the scheduler reschedules (e.g., when activities end early) with only the nominal switch case, as shown in FIG10, until an MSI trigger is satisfied. At this point, the scheduler is reinvoked multiple times, at most once per switch case in each switch group. In the first MSI invocation, the scheduler attempts to schedule the highest level switch case, as shown in FIG10. If the resulting schedule does not contain all mandatory activities, then the scheduler will attempt to schedule the next highest level switch case, as in 7c, and so on. If none of the higher level switch cases can be successfully scheduled, then the schedule is regenerated with the nominal switch case. If activities have ended early by the time MSI is triggered and have resulted in more available resources than expected, then the goal is for this approach to generate a schedule with a more consumptive switch case if it will fit (assuming nominal activity durations for any activities that have not yet executed). There are multiple factors that must be taken into consideration when implementing MSI. When to Trigger MSI: There are two options to trigger the MSI process (the first invocation while trying to schedule the switch case): 1. Time Offset. Start MSI when the current time during execution is some fixed amount of time, X, from the time at which the nominal switch case is scheduled to start in the current schedule (shown in FIG11). 2. Switch Ready. Start MSI when an activity has finished executing and the nominal switch case activity is the next activity scheduled to start (shown in FIG12). FIG12: (a) B1 is the nominal switch case. Since an activity has not finished executing and B1 is not the next activity, MSI cannot begin yet. (b) Since A finished executing early, and B1 is the next activity, the MSI process can begin. If the highest level switch case activity cannot be scheduled in the first invocation of MSI, then the scheduler must be invoked again. We choose to reschedule as soon as possible after the most recent MSI invocation. This method risks over-consumption of the CPU if the scheduler is invoked too frequently. To handle this, we may need to rely on a process within the scheduler called throttling. Throttling places a constraint which imposes a minimum time delay between invocations, preventing the scheduler from being invoked at too high a rate. An alternative is to reschedule at an evenly split, fixed cadence to avoid over-consumption of the CPU; we plan to explore this approach in the future. In some situations, the nominal switch case activity in the original plan may become committed before or during the MSI invocations, as shown in FIG1. An activity is committed if its scheduled start time is between the start and end of the commit window BID2. A committed activity cannot be rescheduled and is committed to execute. If the nominal switch case remains committed, the scheduler will not be able to elevate to a higher level switch case. There are two ways to handle this situation: 1. Commit the activity. Keep the nominal switch case activity committed and do not try to elevate to a higher level switch case. 2. Veto the switch case. Veto the nominal switch case so that it is no longer considered in the current schedule. When an activity is vetoed, it is removed from the current schedule and will be considered in a future invocation of the scheduler. Therefore, by vetoing the nominal switch case, it will no longer be committed, and the scheduler will continue the MSI invocations in an effort to elevate the switch case.
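A minimal sketch of the MSI elevation loop described above (the scheduler invocation and the mandatory-activity check are abstracted behind hypothetical helpers, not the flight scheduler's interface):

def msi_elevate(scheduler, plan, switch_group):
    # Try higher level switch cases from most to least resource consuming,
    # at most one scheduler invocation per case.
    for case in reversed(switch_group.cases[1:]):
        candidate = scheduler.invoke(plan, switch_case=case)
        if candidate.contains_all_mandatory():
            return candidate
    # None of the higher level cases fit: regenerate with the nominal case.
    return scheduler.invoke(plan, switch_case=switch_group.cases[0])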
Handling Rescheduling After MSI Completes but Before the Switch Case is Committed: After MSI completes, there may be events that warrant rescheduling (e.g., an activity ending early) before the switch case is committed. When the scheduler is reinvoked to account for the event, it must know which level of switch case to consider. If we successfully elevated a switch case, we choose to reschedule with that higher level switch case. Since the original schedule generated by MSI with the elevated switch case was in the past and did not undergo changes from this rescheduling, it is possible the schedule will be inconsistent and may lead to complications while scheduling later mandatory activities. An alternative we plan to explore in the future is to disable rescheduling until the switch case is committed. However, this approach would not allow the scheduler to regain time if an activity ended early and caused rescheduling. In order to evaluate the performance of the above methods, we apply them to various sets of inputs comprised of activities with their constraints and compare them against each other. The inputs are derived from sol types. Sol types are currently the best available data on expected Mars 2020 rover operations (Jet Propulsion Laboratory 2017a). In order to construct a schedule and simulate plan execution, we use the Mars 2020 surrogate scheduler: an implementation of the same algorithm as the Mars 2020 onboard scheduler (Rabideau and Benowitz 2017), but intended for a Linux workstation environment. As such, it is expected to produce the same schedules as the operational scheduler but runs much faster in a workstation environment. The surrogate scheduler is expected to assist in validating the flight scheduler implementation and also in ground operations for the mission BID1. Each sol type contains between 20 and 40 activities. Data from the Mars Science Laboratory mission (Jet Propulsion Laboratory 2017b; BID4 BID5) indicates that activity duration estimates were quite conservative and that activities completed early by around 30%. However, there is a desire by the mission to operate with a less conservative margin to increase productivity. In our model to determine activity execution durations, we choose from a normal distribution where the mean is 90% of the predicted, nominal activity duration. The standard deviation is set so that 10% of activity execution durations will be greater than the nominal duration.
For our analysis, if an activity's execution duration chosen from the distribution is longer than its nominal duration, then the execution duration is set to the nominal duration, to avoid the many complications which result from activities running long (e.g., an activity may not be scheduled solely because another activity ran late). Detailed discussion of this is the subject of another paper. We do not explicitly vary other activity resources such as energy and data volume, since they are generally modeled as rates and changing activity durations implicitly changes energy and data volume as well. We create 10 variants derived from each of 8 sol types by adding one switch group to each set of inputs, for a total of 80 variants. The switch group contains three switch cases, A_nominal, A_2x, and A_4x, where

d_A_2x = 2 · d_A_nominal and d_A_4x = 4 · d_A_nominal

In order to evaluate the effectiveness of each method, we have developed a scoring method based on how many, and what type of, activities are scheduled successfully. The score is such that the value of any single mandatory activity being scheduled is much greater than that of any combination of switch cases (at most one activity from each switch group can be scheduled). Each mandatory activity that is successfully scheduled, including whichever switch case activity is scheduled, contributes one point to the mandatory score. A successfully scheduled switch case that is 2 times as long as the original activity contributes 1/2 to the switch group score. A successfully scheduled switch case that is 4 times as long as the original, nominal switch case contributes 1 to the switch group score. If only the nominal switch case is scheduled, it does not contribute to the switch group score at all. There is only one switch group in each variant, so the maximum switch group score for a variant is 1. Since scheduling a mandatory activity is of much higher importance than scheduling any number of higher level switch cases, the mandatory activity score is weighted at a much larger value than the switch group score. In the following empirical results, we average the mandatory and switch group scores over 20 Monte Carlo runs of execution for each variant. We compare the different methods for scheduling switch cases over varying incoming state of charge values (how much energy exists at the start) and determine which methods result in 1) scheduling all mandatory activities and 2) the highest switch group scores. The upper bound for the theoretical maximum switch group score is given by an omniscient scheduler: a scheduler which has prior knowledge of the execution duration of each activity. Thus, this scheduler is aware of the amount of resources that will be available to schedule higher level switch cases, given how long activities take to execute compared to their predicted, nominal durations. The input activity durations fed to this omniscient scheduler are the actual execution durations. We run the omniscient scheduler at most once per switch case. First, we try to schedule with only the highest level switch case, and if that fails to schedule all mandatory activities, then we try with the next level switch case, and so on. First, we determine which methods are able to successfully schedule all mandatory activities, indicated by the Maximum Mandatory Score in FIG1. Since scheduling a mandatory activity is worth much more than scheduling any number of higher level switch cases, we only compare switch group scores between methods that successfully schedule all mandatory activities.
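A minimal sketch of this scoring scheme (the weight constant is our own assumption standing in for "much larger", and schedule.contains is a hypothetical helper):

MANDATORY_WEIGHT = 1000.0  # assumption: any mandatory activity outweighs all switch cases

def score(schedule, mandatory, switch_group):
    mandatory_score = sum(1 for a in mandatory if schedule.contains(a))
    switch_score = 0.0
    # Assumes exactly three cases ordered nominal, 2x, 4x, contributing
    # 0, 1/2, and 1 respectively; at most one case can be scheduled.
    for case, value in zip(switch_group.cases, [0.0, 0.5, 1.0]):
        if schedule.contains(case):
            switch_score = value
    return MANDATORY_WEIGHT * mandatory_score + switch_score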
In order to evaluate the ability of each method to schedule all mandatory activities, we also compare against two other methods: one which always elevates to the highest level switch case, and one which always elevates to the medium level switch case. We see in FIG1 that always elevating to the highest (3rd) level performs the worst and drops approximately 0.25 mandatory activities per sol, or 1 activity per 4 sols on average, while always elevating to the second highest level drops close to 0.07 mandatory activities per sol, or 1 activity per 14 sols on average. For comparison, the study described in BID4 showed that approximately 1 mandatory activity was dropped every 90 sols, indicating that both of these heuristics perform poorly. We found that using preferred time to guard against time caused mandatory activities to drop for both the Fixed Point and Sol Wide guards (for the reason described in the Guarding for Time section), while using the original method to guard against time did not. FIG1: Switch Group Score vs Incoming SOC for Methods which Schedule all Mandatory Activities. We see in FIG1 that the preferred time method with the Fixed Point guard drops on average about 0.04 mandatory activities per sol, or 1 activity every 25 sols, while with the Sol Wide guard it drops on average about 0.1 mandatory activities per sol, or 1 activity every 10 sols. We also see that occasionally fewer mandatory activities are scheduled with a higher incoming SOC. Since using preferred time does not properly ensure that all remaining activities will be able to be scheduled, a higher incoming SOC can allow a higher level switch case to be scheduled, preventing future mandatory activities from being scheduled. The MSI approaches which veto to handle the situation where the nominal switch case becomes committed before or during MSI drop mandatory activities. Whenever an activity is vetoed, there is always the risk that it will not be able to be scheduled in a future invocation, more so if the sol type is very tightly time constrained, which is especially true for one of our sol types. Thus, vetoing the nominal switch case can result in dropping the activity, accounting for this method's inability to schedule all mandatory activities. The MSI methods that keep the nominal switch case committed and do not try to elevate to a higher level switch case successfully schedule all mandatory activities, as do the guard methods. We see that the Fixed Point guard, the Sol Wide guard, and two of the MSI approaches are able to successfully schedule all mandatory activities. As shown in FIG1, the Sol Wide guard and the MSI approach using the options Time Offset and Commit result in the highest switch group scores, closest to the upper bound of the theoretical maximum. Both MSI approaches have switch group scores that increase with increasing incoming SOC, since higher incoming energy results in more energy being available to schedule a consumptive switch case during MSI. The less time there is to complete all MSI invocations, the more likely it is for the nominal switch case to become committed. Since we give up trying to elevate switch cases and keep the switch case committed if this occurs, fewer switch cases will be elevated. Because our time offset value, X, in FIG11 is quite large (15 minutes), this situation is more likely to occur using the Switch Ready approach to choosing when to start MSI, explaining why using Switch Ready results in a lower switch group score than Time Offset.
The Fixed Point guard results in a significantly lower switch case score because it checks against a state of charge constraint at a particular time regardless of what occurs during execution. Even if a switch case is being scheduled at a completely different time than T_Nominal in FIG2 (e.g., because prior activities ended early), the guard constraint will still be enforced at that particular time. Since we simulate activities ending early, more activities will likely complete by T_Nominal, causing the energy level to fall under the Minimum SOC guard value. Unlike the Fixed Point guard, the Sol Wide guard checks whether there is sufficient energy to schedule a higher level switch case at the time the scheduler is attempting to schedule it, not at a set time, so it is better able to consider resources regained from an activity ending early. We also see that the Fixed Point guard begins to result in a lower switch group score at higher incoming SOC levels, once the incoming SOC exceeds 80% of the Maximum SOC. Energy is more likely to reach the Maximum SOC constraint with a higher incoming SOC. The energy gained by an activity taking less time than predicted cannot be used if the resulting energy level would exceed the Maximum SOC. If this occurs, then since the extra energy cannot be used, the energy level may dip below the guard value in FIG5 at time T_Nominal while trying to schedule a higher level switch case, even if an activity ended sufficiently early, as shown in FIG1. Just-In-Case Scheduling BID3 uses a nominal schedule to determine areas where breaks in the schedule are most likely to occur and produces a branching (tree) schedule to cover execution contingencies. Our approaches all (re)schedule on the fly, although the guard methods can be viewed as forcing schedule branches based on time and resource availability. BID8 solve RCPSP (the resource-constrained project scheduling problem) where all activities that must be scheduled are not known in advance and the scheduler must decide whether or not to perform certain activities of varying resource consumption. Similarly, our scheduler does not know which of the switch cases to schedule in advance, using runtime resource information to drive (re)scheduling. Integrated planning and scheduling can also be considered scheduling disjuncts (chosen based on prevailing conditions (e.g., BID0)), but these methods typically search, whereas we are too computationally limited to search. There are many areas for future work. Currently the time guard heavily limits the placement of activities. As we saw, using preferred time to address this issue resulted in dropping mandatory activities. Ideally, analysis of start time windows and dependencies could determine where an activity could be placed without blocking other mandatory activities. Additionally, in computing the guard for the Minimum SOC using the Sol Wide guard, instead of increasing the guard value by a predetermined fixed amount, which could result in over-conservatism, binary search via Monte Carlo analysis could more precisely determine the guard amount. Currently we consider only a single switch group per plan; the Mars 2020 rover mission desires support for multiple switch groups in the input instead. Additional work is needed to extend to multiple switch groups. Further exploration of all of the MSI variants is needed. Study of starting MSI invocations if an activity ends early by at least some amount and the switch case is the next activity is planned.
We would like to analyze the effects of evenly spacing the MSI invocations in order to avoid relying on throttling, and we would like to try disabling rescheduling after MSI is complete until the switch case has been committed, to understand whether this results in major drawbacks. We have studied the effects of time and energy on switch cases, and we would like to extend these approaches and this analysis to data volume. We have presented several algorithms to allow a very computationally limited, non-backtracking scheduler to consider a schedule containing required, or mandatory, activities and sets of activities called switch groups, where each activity in such a set differs only by its resource consumption. These algorithms strive to schedule the most preferred, which happens to be the most consumptive, activity possible in the set without dropping any other mandatory activity. First, we discuss two guard methods which use different approaches to reserve enough resources to schedule the remaining mandatory activities. We then discuss a third algorithm, MSI, which emulates backtracking by reinvoking the scheduler at most once per level of switch case. We present empirical analysis using input sets of activities derived from data on expected planetary rover operations to show the effects of using each of these methods. These implementations and this empirical evaluation are currently being assessed in the context of the Mars 2020 onboard scheduler.
This paper describes three techniques to allow a non-backtracking, computationally limited scheduler to consider a small number of alternative activities based on resource availability.
947
scitldr
Inspired by the modularity and the life-cycle of biological neurons, we introduce Continual Learning via Neural Pruning (CLNP), a new method aimed at lifelong learning in fixed capacity models, based on the pruning of neurons of low activity. In this method, an L1 regulator is used to promote the presence of neurons of zero or low activity whose connections to previously active neurons are permanently severed at the end of training. Subsequent tasks are trained using these pruned neurons after reinitialization and cause zero deterioration to the performance of previous tasks. We show empirically that this biologically inspired method leads to state of the art results, beating or matching current methods of higher computational complexity. Continual learning, the ability of models to learn to solve new tasks beyond what has previously been trained, has garnered much attention from the machine learning community in recent years. The main obstacle for effective continual learning is the problem of catastrophic forgetting: machines trained on new problems forget about the tasks that they were previously trained on. There are multiple approaches to this problem, from employing networks with many submodules to methods which penalize changing the weights of the network that are deemed important for previous tasks. These approaches either require specialized training schemes or still suffer catastrophic forgetting, albeit at a smaller rate. Furthermore, from a biological perspective, the current fixed capacity approaches generally require the computation of a posterior in weight space, which is non-local and hence biologically implausible. Motivated by the life-cycle of biological neurons, we introduce a simple continual learning algorithm for fixed capacity networks which can be trained using standard gradient descent methods and suffers zero deterioration on previously learned problems during the training of new tasks. In this method, the only modifications to standard machine learning algorithms are simple and biologically plausible: a sparsifying L1 regulator and activation-threshold-based neural pruning. We demonstrate empirically that these modifications to standard practice lead to state of the art performance on standard catastrophic forgetting benchmarks. Lifelong learning. Prior work addressing catastrophic forgetting generally falls under two categories. In the first category, the model is comprised of many individual modules at each layer, and forgetting is prevented either by routing the data through different modules or by successively adding new modules for each new task. This approach often has the advantage of suffering zero forgetting; however, the structure of these networks is specialized. In some of these cases the model is not fixed capacity, and in others training is done using a tournament selection genetic algorithm. In the second category of approaches to lifelong learning, the network structure and training scheme are standard, and forgetting is addressed by penalizing changes to weights which are deemed important. These approaches, generally referred to as weight elasticity methods, have the advantage of simpler training schemes but still suffer catastrophic forgetting, albeit at a smaller rate than unconstrained training. Sparsification. While sparsification is a crucial tool that we use, it is not in itself a focus of this work.
For accessibility, we use a simple neuron/filter based sparsification scheme, which can be thought of as a single-iteration variation of existing iterative pruning schemes. The core idea of our method is to take advantage of the fact that neural networks are vastly over-parametrized. A manifestation of this over-parametrization is the practice of sparsification, i.e. the compression of a neural network with relatively little loss of performance. As an example, it has been shown that VGG-16 can be compressed by more than 16 times. In this section we first show that, given an activation-based sparse network, we can leverage the unused capacity of the model to develop a continual learning scheme which suffers no catastrophic forgetting. We then discuss the idea of graceful forgetting to address the tension between sparsification and model performance in the context of lifelong learning. In what follows we discuss sparsity for fully connected layers by looking at the individual neurons. The same argument goes through identically for individual channels of convolutional layers. Figure 1: The partition of a network with neuronal sparsity into active, inactive, and interference parts. Let us assume that we have a trained network which is sparse in the sense that only a subset of the neurons of the network are active. Networks with this form of sparsity can be thought of as narrow subnetworks embedded inside the original structure. There are many approaches that aim to train such sparse networks with little loss of performance. We will discuss our sparsification method in detail in §3.2. Fig. 1 shows a cartoon of our approach, where we have a network with activation-based neuronal sparsity, with the active and inactive neurons respectively denoted by blue and grey nodes. Based on the connectivity structure, the weights of the network can also be split into three classes. First, denoted in blue in Fig. 1, we have the active weights W_act, which connect active nodes to active nodes. Next we have the weights which connect any node to inactive nodes; we call these the free weights W_free, denoted in grey. Finally we have the weights which connect the inactive nodes to the active nodes; we call these the interference weights W_int, denoted by red dashed lines. A more precise definition of the active and inactive neurons and weights is given in §3.2. The crux of our approach is the simple observation that if all the interference weights W_int are set to zero, the free weights W_free can be changed arbitrarily without causing any change whatsoever to the output of the network. We can therefore utilize these weights to train new tasks without any catastrophic forgetting of the previous tasks. We can further split the free weights into two groups. First, the weights which connect active nodes to inactive nodes. These are the weights that take advantage of previously learned features and are therefore responsible for transfer learning throughout the network. We also have the weights that connect inactive nodes to inactive nodes. These weights can form completely new pathways to the input and train new features.
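A minimal sketch of this weight partition and interference-freezing step for one fully connected layer, assuming boolean masks of active neurons are available (the function and variable names are our own, not the paper's code):

import numpy as np

def freeze_interference(W, in_active, out_active):
    # W has shape (n_out, n_in); in_active / out_active are boolean masks
    # of the active neurons in the layer below and in this layer.
    # Interference weights connect inactive inputs to active outputs;
    # zero them permanently so training the free weights cannot disturb
    # the outputs computed for previous tasks.
    W[np.ix_(out_active, ~in_active)] = 0.0
    # Free weights are those feeding inactive outputs; they may be
    # reinitialized and trained on the next task. Active weights stay frozen.
    free_mask = np.zeros_like(W, dtype=bool)
    free_mask[~out_active, :] = True
    return W, free_mask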
A simple measure of the amount of transfer learning taking place is the number of new active neurons at each layer after the training of subsequent tasks. Given that an efficient sparse training scheme would not need to relearn the features that are already present in the network, the number of new neurons grown at each stage of training is an indicator of the sufficiency of the already learned features for the purposes of the new task. For example, if the features learned at some layer for previous tasks provide sufficient statistics for the purposes of a subsequent task, no new neurons need to be trained at this layer during the training of the subsequent task. We will see more of this point in §4. Output architecture. To fully flesh out a continual learning scheme, we need to specify the connectivity structure of the output nodes. There are two intuitive routes that we can take. In order to train a new task, one option is to use a new output layer (i.e. a new head) while saving the previous output layer. This option, demonstrated in Fig. 2 on the left, is known as the multi-head approach and is standard in continual learning. Because each new output layer comes with its own set of weights which connect to the final hidden layer neurons, this method is not a fully fixed capacity method. Note that in our approach to continual learning, training a multi-head network with a fully depleted core structure, i.e. a network where there are no more free neurons left, is equivalent to final layer transfer learning. In scenarios where the output layers of the different tasks are structurally compatible, for example when all tasks are classification over the same number of classes, we can use a single-head approach. Demonstrated in Fig. 2 on the right, in this approach we use the same output layer for all tasks, but for each task we mask out the neurons of the final hidden layer that were trained on other tasks. In the case of Fig. 2, only green nodes in the final hidden layer are connected to the output for the second task, and only blue nodes for the first task. This is equivalent to a dynamic partitioning of the final hidden layer into multiple unequal-sized parts, one part for each task. In practice this is done using a multiplicative masking operation with a task-dependent mask, denoted in Fig. 2 by dashed lines after the final hidden layer. This structure, being truly fixed capacity, is more restrictive to train than its multi-head counterpart. Because of this, single-head continual learning algorithms were not possible previously, and as far as we are aware, CLNP is the first viable such algorithm. In what follows we assume that we are using Rectified Linear Units (ReLU). While we have only tested our methodology with ReLU networks, we expect it to work similarly with other activations. Sparsification. So far in this section we have shown that, given a sparse network trained on a number of tasks, we can train the network on new tasks without suffering any catastrophic forgetting. We now discuss the specific scheme that we use to achieve this sparsity, which is similar in spirit to the network trimming approach put forward in Ref.. Our sparsification method is comprised of two parts. First, during the training of each task, we add an L1 weight regulator to promote sparsity and regulate the magnitude of the weights of the network. This is akin to biological energy requirements for synaptic communication. The coefficient α of this regulator is a hyperparameter of our approach. We can also gain more control over the amount of sparsity in each layer by choosing a different α for different layers.
The second part of our sparsification scheme is post-training neuron pruning based on the average activity of each neuron. This step is the analogue of long-term depression of synaptic connections between neurons without correlated activities. Subsequently, at the beginning of training a new task, the connections of these pruned neurons are reinitialized in a manner reminiscent of the life-cycle of biological neurons. Note that most efficient sparsification algorithms include a third part which involves adjusting the surviving weights of the network after pruning. This step is referred to as fine-tuning and is done by retraining the network for a few epochs while only updating the weights which survive sparsification. This causes the model to regain some of the performance lost to pruning. To achieve a yet higher level of sparsity, one can iterate the pruning and fine-tuning steps multiple times. For simplicity, unless otherwise specified, we only perform one iteration of pruning without the fine-tuning step. Partitioning the network. In §3.1, we split the network into active and inactive parts, which we define as follows. Given a network N comprised of L layers, we denote the neurons of each layer as N_l with l = 1···L. Let us also assume that the network N has been trained on dataset S. In order to find the active and inactive neurons of the network, we compute the average activity over the entire dataset S for each individual neuron. In a network with ReLU activations, we identify the active neurons N_l^act as those whose average activity exceeds a threshold θ. We can therefore view N_l^act as a compression of the network into a sub-network of smaller width. Based on their connectivity structure, the weights of each layer are again divided into active, free, and interference parts, respectively corresponding to the blue, grey, and red lines in Fig. 2.
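A minimal sketch of this activity-based selection, assuming post-ReLU activations for one layer can be collected over the dataset (the helper name is our own):

import numpy as np

def active_neuron_mask(activations, theta):
    # activations: array of shape (num_samples, num_neurons) holding the
    # post-ReLU outputs of one layer over the whole dataset S.
    avg_activity = activations.mean(axis=0)
    # Neurons whose average activity exceeds the threshold stay active;
    # the rest are pruned and later reinitialized for the next task.
    return avg_activity > theta

Graceful forgetting. While sparsity is crucial in our approach for the training of later tasks, care needs to be taken so as not to overly sparsify and thereby reduce the model's performance. In practice, model sparsity has a similar relationship with generalization as other regularization schemes. As sparsity increases, the generalization performance of the model initially improves. However, as we push our sparsity knobs (i.e. the L1 regulator coefficient α and the activity threshold θ) higher, eventually both training and validation accuracy suffer and the network fails to fit the data properly. This means that in choosing these hyperparameters, we have to make a compromise between model performance and remaining network capacity for future tasks. This brings us to a subject which is often overlooked in the lifelong learning literature, generally referred to as graceful forgetting. This is the general notion that it would be preferable to sacrifice some accuracy in a controlled manner, if doing so reduces future catastrophic forgetting of this task and also helps in the training of subsequent tasks. We believe any successful fixed capacity continual learning algorithm needs to implement some form of graceful forgetting scheme. In our approach, graceful forgetting is implemented through the sparsity vs. performance compromise. In other words, after the training of each task, we sparsify the model up to some acceptable level of performance loss (given by a margin parameter m) in a controlled manner. We then move on to subsequent tasks knowing that the model no longer suffers any further deterioration from the training of future tasks.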
This has to be contrasted with other weight elasticity approaches, which use soft constraints on the weights of the network and cannot guarantee the future performance of previously trained tasks. The choice of sparsity hyperparameters is made based on this incarnation of graceful forgetting as follows. We scan over a range of hyperparameters (α, the L1 weight regulator coefficient, and ξ, the learning rate) using grid search and note the best validation accuracy across all hyperparameters. We then pick the models which achieve validation accuracy within a margin of m% of this best validation accuracy. The margin parameter m controls how much we are willing to compromise on accuracy to regain capacity; in experiments we take it to be in the range of 0.05% to 2%, depending on the task. We sparsify the picked models using the highest activation threshold θ such that the model remains within this margin of the best validation accuracy. We finally pick the hyperparameters which give the highest sparsity among these models. In this way, we efficiently find the optimal hyperparameters α*(m), θ*(m), and ξ*(m) which afford the highest sparsity model with validation accuracy within m% of the highest validation accuracy among all hyperparameters. After pruning away the unused weights and neurons of the model with the hyperparameters chosen as above, we report the test accuracy of the sparsified network. This algorithm for training and hyperparameter grid search does not incur any significant additional computational burden over standard practice. The hyperparameter search is performed in standard fashion, and the additional steps of selecting networks within the acceptable margin, scanning the threshold, and selecting the highest sparsity network only require evaluation and do not include any additional network training.
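A minimal sketch of the final, margin-based selection step (the inputs are hypothetical records of already-trained and threshold-scanned models; the margin is in accuracy percentage points):

def select_model(models, margin):
    # models: list of dicts with keys 'val_acc', 'sparsity', 'hparams'.
    best_acc = max(m["val_acc"] for m in models)
    # Keep models within margin% of the best validation accuracy...
    candidates = [m for m in models if m["val_acc"] >= best_acc - margin]
    # ...and among those, pick the sparsest one.
    return max(candidates, key=lambda m: m["sparsity"])

Since this step only filters and compares already-evaluated models, it adds no training cost on top of the standard grid search.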
Permuted MNIST. In this experiment, we look at the performance of our approach on ten tasks derived from the MNIST dataset via ten random permutations of the pixels. To compare with previous work, we start with the same structure and hyperparameters as in Ref.: a multi-head MLP architecture with two hidden layers, each with 2000 neurons and ReLU activations, and a softmax multi-class cross-entropy loss trained with the Adam optimizer and batch size 256. In order to make the task more challenging, we look at two variations of this structure. For the first variation, we employ only a single head to demonstrate the viability of our single-head approach. For the second variation, we use layers of width 100 instead of 2000. For the first network variation (wide single-head structure), we do a search over the hyperparameters on the first task using a held-out validation set, just as in Ref.. For the remaining tasks, we settle on a learning rate of 0.002 and L1 weight regularization α = 10^−7, 10^−5, and 10^−6, respectively, for the first, second, and final layers. Finally, when sparsifying after training each task, we allow for graceful forgetting with a small margin of m = 0.05%. With test error within 0.05% of single task SGD training, CLNP virtually eliminates catastrophic forgetting and achieves an average accuracy of 98.42 ± 0.04, just shy of the single task performance of 98.48 ± 0.05 (mean ± STD over 5 iterations of the experiment). In the second variation of the network (narrow multi-head structure), we perform sparsification with 2 iterations of fine-tuning and specifically choose a graceful forgetting margin (m = 3%) such that the network is saturated (runs out of free neurons) after all 10 tasks have been trained. In this case, our method attains an average of 95.8% over 10 tasks. In both variations of the network, our results achieve state of the art performance on networks of comparable size, matching or exceeding prior methods of much higher conceptual and computational complexity. For an exhaustive comparison of these results with previous methods see Tab. 2 in. Split CIFAR-10/CIFAR-100. In this experiment, we train an image classifier sequentially, first on CIFAR-10 (task 1) and then on CIFAR-100 split into 5 different tasks, each with 10 classes (tasks 2-6). We employ the same multi-head network used in Ref., and we use two different training schemes with maximum graceful forgetting of m = 1% and m = 2%. The validation accuracy of the 6 tasks after training them sequentially is shown in Fig. 3a. We see that we again achieve state of the art performance. The more ambitious m = 1% scheme (which only allowed for graceful forgetting of less than 1%) runs out of capacity after the fourth task is trained. We notice that after the model capacity is depleted (tasks 5 and 6, denoted with red dashed lines), the performance of the m = 1% scheme plummets, showing the necessity of unused neurons for the performance of the network. The more moderate forgetting scheme, m = 2% (denoted in orange), however, maintains high performance throughout all tasks and does not run out of capacity until the final task is trained. We repeated the experiment with a graceful forgetting margin of m = 4%, but this time followed by fine-tuning, i.e. retraining of the remaining weights after pruning. The results of this method are given in Fig. 3a in green. We see that here there is virtually no catastrophic forgetting on the first task (the model performs even better after pruning and retraining, as has been reported in previous sparsity literature). The remaining tasks also get a significant boost from this improved sparsification method. This is a simple demonstration of the potential of sparsification-based continual learning methods given more advanced sparsification schemes. We also use a wider single-head network for comparison. In Fig. 3b, we can see the number of new channels learned at each layer for each consecutive task. Of note, the first convolutional layer trains new channels only for tasks 1 and 2. The second and third convolutional layers grow new channels up to task 3 and task 5, respectively. The fourth layer keeps training new channels up to the last task. The fact that the first layer grows no new channels after the second task implies that the features learned during the training of the first two tasks are deemed sufficient for the training of the subsequent tasks. The fact that this sufficiency occurs only after training more tasks for layers 2 and 3 is a verification of the notion that features learned in lower layers are more general, and thus more transferable, than the features of the higher layers, which are known to specialize. This observation implies that models which hope to be effective at continual learning need to be wider in the higher layers to accommodate this lack of transferability of the features at these scales.
In this work we have introduced an intuitive lifelong learning method which leverages the over-parametrization of neural networks to train new tasks in the inactive neurons/filters of the network without suffering any catastrophic forgetting in the previously trained tasks. We implemented a controlled way of graceful forgetting by sacrificing some accuracy at the end of the training of each task in order to regain network capacity for training new tasks. We showed empirically that this method leads to results which exceed or match the current state-of-the-art while being less computationally intensive. Because of this, we can employ larger models than otherwise possible, given fixed computational resources. Our methodology comes with simple diagnostics based on the number of free neurons left for the training of new tasks. Model capacity usage graphs are informative regarding the transferability and sufficiency of the features of different layers. Using such graphs, we have verified the notion that the features learned in earlier layers are more transferable. We can leverage these diagnostic tools to pinpoint any layers that run out of capacity prematurely, and resolve these bottlenecks in the network by increasing the number of neurons in these layers when moving on to the next task. In this way, our method can expand to accommodate more tasks and compensate for sub-optimal network width choices.
We use simple and biologically motivated modifications of standard learning techniques to achieve state of the art performance on catastrophic forgetting benchmarks.
948
scitldr
There has been an increasing use of neural networks for music information retrieval tasks. In this paper, we empirically investigate different ways of improving the performance of convolutional neural networks (CNNs) on spectral audio features. More specifically, we explore three aspects of CNN design: the depth of the network, the use of residual blocks along with grouped convolution, and global aggregation over time. The application context is singer classification and singing performance embedding, and we believe the results extend to other types of music analysis using convolutional neural networks. The results show that global time aggregation helps to improve the performance of CNNs the most. Another contribution of this paper is the release of a singing recording dataset that can be used for training and evaluation.

Deploying deep neural networks to solve music information retrieval problems has benefited from advancements in other areas such as computer vision and natural language processing. In this paper, experiments are designed to investigate whether a few of the recent signature advancements in the deep learning community can improve the learning capability of deep neural networks when applied to time-frequency representations. Because time-frequency representations are frequently treated as 2-D images, similarly to image input for computer vision models, convolution layers are popular choices as the first processing layers for time-frequency representations in audio and music analysis applications. One of the recent convolutional layer variants is the residual neural network with a bottleneck design, ResNet. Another variant, built upon ResNet, uses grouped convolution inside the bottleneck as a generalization of the Inception Net BID12 BID22: ResNeXt. These two variants have enabled further deepening of the convolutional layers of deep neural networks. Most existing music information retrieval research using convolutional neural networks (CNNs) utilizes vanilla convolutional layers with no more than 5 layers. In this paper, the two convolution layer variants mentioned and a deeper architecture with more than 5 convolution layers are proposed and shown to be effective on audio time-frequency representations.

Conceptually, convolution layers take care of learning local patterns (neighboring pixels in images, or time frames/frequency bins in time-frequency representations) presented in the input matrices. After learning feature maps from the convolution layers, one of the recurring issues when the input is a time-frequency representation is how to model or capture temporal relations. Recurrent neural networks have been used to solve this problem BID6 BID1 BID3 BID4. Recent developments from natural language processing in attention mechanisms BID0 BID3 BID15 provide a different approach to modeling temporal dependencies and relations. In this paper, the attention mechanism is viewed as a special case of a global aggregation operation along the time axis that has learnable parameters. Typical aggregation operations such as average or max have no learnable parameters. The effects of global aggregation along the time axis using either average, max or the attention mechanism are investigated experimentally. Two specific applications are investigated in this paper: 1) singer classification of monophonic recordings, and 2) singing performance embedding. The goal of singer classification is to predict the singer's identity given an audio recording as input.
A finite set of possible singers is considered, so this is a classification task. In singing performance embedding, the goal is to create an embedding space in which singers with similar styles are projected closer to each other compared to singers with different styles. Ideally, it should be possible to identify "singing style" or "singing characteristics" by examining (and listening to) the clusters formed from the projections of audio recordings onto the embedding space. Many tasks in music and audio analysis can be formulated in a similar way, in which similarity plays an essential role; therefore we believe that the results of this paper generalize to other audio and music tasks.

The main challenge and interesting point about this application context is how to isolate the "singer effect" from the "song effect". Classic hand-crafted audio features capture general aspects of similarity. When the same song is performed by different singers, audio-based similarity tends to be higher than when different songs are performed, i.e. the "song effect" BID14. In order to effectively model singing we need to learn a representation that emphasizes singer similarity while at the same time reducing the effect of song similarity. As an analogy, consider the computer vision problem of face identification. When learning representations for this task we want the information about the identity of the face to be minimally affected by the effect of the environment and pose. The interfering "song effect" is even more dominant in the singing voice case than the environment/pose effect in face recognition. Extending this analogy with computer vision, singing performance embedding is analogous to learning an embedded space for face verification BID5 BID17. In this approach, an embedded space of faces is learned with the goal of having pictures of the same person close to each other, and having pictures of different persons away from each other in the learned embedding space. This is accomplished by utilizing a siamese neural network instead of a classifier BID5 BID8 BID16. The large number of identities makes the use of a classifier impractical. By learning an embedding space for singing voice audio recordings that places recordings of the same identity closer to each other, and pushes the ones with different identities away from each other, ideally "singing style" or "singing characteristics" can be identified by examining (and listening to) the clusters formed from the embeddings of audio recordings in the learned embedding space.

For both the singer identity classification and the singing performance embedding, we employ an architecture that uses CNNs to extract features, followed by a global aggregation layer, after which fully connected dense layers are used. The difference between the architectures used for these two tasks is that, for singer identity classification, the output layer is the standard softmax layer that outputs classification probabilities for each singer included in the dataset, while for the singing performance embedding, the output layer is a fully connected linear layer that embeds each input sample into a fixed-length vector, after which a copy of the network is used to construct a siamese architecture to learn the embedding space.
Practically, having a model that embeds singing recordings into short fixed-length vectors makes it possible to speed up the similarity comparison of two long spectrogram sequences (which differ in length) by calculating the Euclidean distance between their fixed-length embedding vectors BID16. This allows a large database of singing recordings to be queried with input singing recordings more efficiently. In order to evaluate the singing performance embedding model in an unbiased way (not biased towards the collection of songs sung by a singer), a new set of "balanced" singing recordings is gathered and released. The newly released dataset is an addition to the existing DAMP data set of monophonic vocal music performances. The paper is structured as follows. In Section 2, the details of the neural network building blocks used in the experiments are described. The dataset used and the experiment details are disclosed in Section 3. Discussions and conclusions are in Section 4.

The neural network architectures used in the experiments follow a general design pattern depicted in Figure 1. The general design pattern is to feed the input time-frequency features as 2-D images into convolutional layers/blocks, then feed the extracted feature maps to a global time-wise aggregation layer. The output from the global time-wise aggregation layer is fed into dense layers, followed by the output layer. The details of each building block are described below. The basic convolution layer being used is the vanilla convolution layer with shared weights and tied biases across channels, without any modification. The other variant being used in our experiments is the residual network design with the bottleneck block introduced in ResNet. This variant is extended by using the grouped convolutional block, introduced in ResNeXt BID22, on top of the ResNet. Depictions of the vanilla convolution building block, the ResNet, and ResNeXt are shown in FIG0. Let the outlets in FIG0 be y, the inlets be x, and f, g, h be convolution operations. The vanilla convolutional block (a) in FIG0 would be y = g(f(x)), while the ResNet bottleneck block (b) is y = x + f(g(h(x))), and the ResNeXt bottleneck block is y = x + Γ(x), with Γ(·) being the grouped convolution consisting of a series of sliced f(g(h(·))) operations over the channel axis of the input. Under the ResNeXt configuration, the ResNet configuration is a special case where the cardinality parameter equals 1 BID22. A max pooling layer is placed between convolutional blocks in the following way: the first convolutional layer is followed immediately by a max pooling layer, while for all the remaining layers the max pooling layers are inserted between every two consecutive convolutional layers/blocks. A distinction between the terms convolution layer and block needs to be made here. A convolutional layer refers to a single vanilla convolution layer, while a convolutional block refers to any of the three architecture patterns shown in FIG0. In TAB0, in the column for the number of CNN filters, each number represents the number of output channels for each convolutional layer or block, with normal text for layers and bold text for blocks. Batch normalization is applied after each non-linear activation throughout the convolutional layers/blocks.

Figure 1: An overview of the neural network architecture used in this paper (input → convolutional layers/blocks → global time-wise aggregation layer → dense layers → output layer).
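The block types just described can be written compactly in code. Below is a minimal PyTorch sketch of the bottleneck pattern, where the channel widths and kernel sizes are illustrative; setting groups=1 recovers the ResNet special case mentioned above.

```python
# Minimal sketch of the residual bottleneck y = x + f(g(h(x))); with groups > 1
# the middle convolution is grouped, giving the ResNeXt block y = x + Gamma(x).
import torch.nn as nn

class Bottleneck(nn.Module):
    def __init__(self, channels, mid_channels, groups=1):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(channels, mid_channels, kernel_size=1),    # h: reduce channels
            nn.ReLU(),
            nn.Conv2d(mid_channels, mid_channels, kernel_size=3,
                      padding=1, groups=groups),                 # g: (grouped) conv
            nn.ReLU(),
            nn.Conv2d(mid_channels, channels, kernel_size=1),    # f: restore channels
        )

    def forward(self, x):
        return x + self.branch(x)   # identity shortcut plus bottleneck branch
```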
Before feeding the output of the convolutional layers to the global time-wise aggregation, the 3-D feature maps having the shape (# of channels, # of time frames, # of frequency bins) are reshaped into 2-D matrices having the shape (# of time frames, # of channels × # of frequency bins). Originally, the attention mechanism was introduced for sequence-to-sequence learning BID0 in an RNN architecture, allowing the prediction at each time-step to access information from every step in the input hidden sequence in a weighted way. Since the experiments done in this paper do not need sequence-to-sequence prediction, the feed-forward version of attention proposed in BID15 is used instead of the original one. The feed-forward attention is formulated as follows. Given the input matrix X ∈ R^{N×D} representing N frames of D-dimensional feature vectors, a weight vector σ ∈ R^N over the time-steps is calculated by

σ = softmax(e), where e_n = f(x_n · w + b),

f is a non-linear function (tanh for the experiments done in this paper), and w ∈ R^D and b ∈ R are the learnable parameters, which can be learned by back-propagation. The output X̂ of the feed-forward attention layer is then calculated via

X̂ = Σ_{n=1}^{N} σ_n · x_n,

where X̂ can be considered a weighted average of X with weights σ, determined by the learnable parameters w and b. This attention operation can also be viewed as an aggregation operation over the time axis, similar to max or average. The idea of aggregation over a specific axis can then be generalized by having feed-forward attention, max and average all in the same family, except that the latter two have no learnable parameters. This family of operations is different from the standard max/average pooling in convolution layers, in that the aggregation is global to the scope of the input sample, i.e. the aggregation reduces the dimension of the aggregation axis to 1. A specific realization of the network architecture including both the convolutional and the global aggregation parts can be found in Appendix B.

The two tasks explored in this paper are singer identity classification and singing performance embedding. In terms of experimentation with different hyperparameters and network architectures, the singer classification problem provides clear evaluation criteria in terms of model performance, so that different hyperparameters and architectural choices can be compared to each other. On the other hand, the embedding task allows a more exploratory way to understand the input, in the sense that it is the spatial relationships between the embedded samples that are interesting to us. For both tasks, numerical evaluation metrics, as well as plots of the embedded samples from the singing performance embedding, are provided in order for readers to examine the results both quantitatively and qualitatively. The dataset being used for singer identity classification is the DAMP dataset. The DAMP dataset has a total of 34620 solo singing recordings by 3462 singers, with each singer having 10 recordings. The collections of songs sung by each singer are different, and some singers sing the same song multiple times. Therefore the DAMP dataset is "unbalanced", making it difficult for the learning algorithm not to be biased towards the singer-specific collection of songs when learning to predict the singer identity. Therefore an additional dataset, with each singer singing the same collection of songs, is collected and released for training and evaluation.
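Returning to the aggregation layer defined earlier in this section, here is a minimal NumPy sketch of the feed-forward attention; X is the reshaped (N time frames, D features) matrix, and w, b are the learnable parameters.

```python
import numpy as np

def feed_forward_attention(X, w, b):
    e = np.tanh(X @ w + b)              # per-frame scores, shape (N,)
    sigma = np.exp(e - e.max())
    sigma /= sigma.sum()                # softmax over the time axis
    return sigma @ X                    # weighted average over time, shape (D,)

# Max and average are the non-learnable members of the same family:
# X.max(axis=0) and X.mean(axis=0) also collapse the time axis to length 1.
```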
The set of added collections of solo singing recordings is named the DAMP-balanced dataset. DAMP-balanced has a total of 24874 singing recordings sung by 5429 singers. The song collection of DAMP-balanced has 14 songs. The structure of DAMP-balanced is that the last 4 songs are designed to be the test set, and the first 10 songs can be partitioned into any 6/4 train/validation split (permutation) such that the singers in the train and validation sets sang the same 6- and 4-song collections according to the split (the total number of recordings for the train and validation sets differs from split to split, since a different number of singers sang all the songs of a given split). The DAMP-balanced dataset is suitable for the singing performance embedding task, while the original DAMP dataset can be used to train singer identity classification algorithms. The song list and detailed descriptions of DAMP-balanced are provided in Appendix A.

The inputs to the neural networks are time-frequency representations extracted from raw audio signals. These time-frequency representations are obtained by applying the short-time Fourier transform, which extracts frequency information for each short time window analyzed. As a result, most time-frequency representations take the form of 2-D matrices, with one axis representing time while the other axis represents frequencies. The entry [j, i] of the matrix represents the intensity of a particular frequency i at a particular time frame j. In this paper, the Mel-scaled magnitude spectrogram (Mel-spectrogram) is used as the input feature to the neural network. Mel-spectrograms are used as input to neural network tasks in BID3 BID7 BID4. The other common choice of audio time-frequency input is the constant-Q transformed spectrogram (CQT), which is used extensively in music information retrieval tasks BID20 due to its capability of preserving the constant octave relationships between frequency bins (log-scaled frequency bins). Since all neural network configurations using CQT perform worse than their Mel-spectrogram versions, only a few representative results using CQT are shown in TAB0. The reason why CQT works worse is that although the CQT preserves linear relationships between the intervals of different pitches, these linear relationships do not apply to the distances between different harmonics of one pitch. Since the audio recordings being analyzed here only have one single singing voice at each time frame, the constant octave relationship does not help the neural networks learn the time-frequency patterns of singing voices.

The audio recordings are all re-sampled to a 22050 Hz sampling rate, then the Mel-scaled magnitude spectrograms are obtained using a Fast Fourier Transform (FFT) with a length of 2048 samples, a hop size of 512 samples, a Hanning window and 96 Mel-scaled frequency bins. The extracted Mel-spectrogram is squared to obtain the power spectrogram, which is then transformed into decibels (dB). The values below −60 dB are clipped to zero and an offset is added to the whole power spectrogram in order to have values between 0 and 60. For both tasks, each singing performance audio recording is transformed to a Mel-spectrogram as described above. The Mel-spectrogram of each recording is then chopped into overlapping matrices, each of which has a duration of 6 seconds (256 time steps), with a 20% hop size. For both tasks, gradient descent is optimized by ADAM BID11 with a learning rate of 0.0001 and a batch size of 32.
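The feature extraction above can be sketched with librosa; the ref=np.max normalization before clipping is an assumption, since the source only states that values below −60 dB are clipped and the range is offset to [0, 60].

```python
# Sketch of the Mel-spectrogram input features described above (librosa).
import librosa
import numpy as np

def mel_features(path):
    y, sr = librosa.load(path, sr=22050)                      # re-sample to 22050 Hz
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048, hop_length=512,
                                       window="hann", n_mels=96, power=2.0)
    S_db = librosa.power_to_db(S, ref=np.max)                 # power spectrogram in dB
    S_db = np.maximum(S_db, -60.0) + 60.0                     # clip below -60 dB, offset to [0, 60]
    return S_db.T                                             # (time frames, 96 mel bins)
```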
A dropout of 10% is applied to the last fully connected dense layers. L2 weight regularization with a weight of 1e-6 is applied to all the learnable weights in the neural network. The above hyperparameters are chosen by the Bayesian optimization package SPEARMINT BID19. For both tasks, an early stopping test on the validation set is applied every 50 epochs. For the singer identity classification, the patience is 300 epochs with at least 99.5% improvement, and the patience for the singing performance embedding task is 1000. The non-linear activation function used in all convolution layers and fully connected layers is the rectified linear unit. Different convolutional filter sizes are used for the first convolutional layer and for all subsequent convolutional layers. For the fully connected part, 3 dense layers, each having 1024 hidden units, are used before the last output layer.

A subset of 46 singers (23 males and 23 females), corresponding to 460 solo singing recordings from the DAMP dataset, is selected for the singer classification problem. A 10-fold cross validation is used to obtain the test accuracies for the different models, with each fold using 1 recording from each singer as the test set, while the training is performed on the remaining 9 recordings, with 1 of them selected randomly as the validation set for early stopping. For the classification task we explore different combinations of neural network configurations in terms of using either the vanilla CNN or ResNeXt building blocks. Different numbers of layers and different types of aggregation, such as max, average, feed-forward attention or no global aggregation, are also investigated. A baseline SVM classifier is also included, with the mean and standard deviation of chroma, MFCC, spectral centroid, spectral roll-off, and spectral flux BID2 extracted from each ∼6 second clip as the input. The experimental results and associated measures of the different models are displayed in TAB0. The number of convolution filters is chosen so that the total numbers of parameters are on the same scale between different configurations. From TAB0, it can be seen that the baseline method achieved 27% accuracy, which is above the random prediction accuracy of 2.2% (1/46), while all the neural network models far exceeded the baseline, by at least 35%. For all the neural network models, the use of any global aggregation method improved the performance by 5% to 10%. Among the neural network models, global aggregation with average or feed-forward attention has slightly better performance than max, except for the shallower CNN.

For the singing performance embedding experiment, a subset of the DAMP-balanced with a 6/4/4 train/validation/test split is used. The total numbers of recordings and singers for this specific split are 276/88/224 and 46/22/56 respectively. We would like an embedding space that places recordings by the same singer closer to each other and pushes recordings by different singers away from each other, so a siamese neural network architecture BID5 BID8 BID16 is used. The inner twin neural network is constructed following the same principles described earlier in Section 2. The embedding dimension of the linear fully connected output layer is chosen to be 16 by SPEARMINT. Since a siamese network learns the embedding by shortening or lengthening the distance between pairs of embedded vectors based on their label, pairs of samples from the dataset are arranged and labeled.
Denote a pair of samples by x_1, x_2 ∈ R^D, and let y be a binary label that equals 1 when x_1, x_2 have the same identity and 0 when their identities are different. The distance metric optimized over the siamese network in this experiment is the squared Euclidean distance

D(x_1, x_2) = ‖G(x_1) − G(x_2)‖²₂,

and the contrastive loss BID5 BID8 BID16 is used as the optimization goal, defined as

L(x_1, x_2, y) = y · D + (1 − y) · max(0, m − √D)²,

where G is the non-linear function that represents the neural network and m is the target margin between embedded vectors having different identities; m = 1 throughout the experiments. To train the siamese networks, pairs of chopped samples from the same singer or different singers are randomly sampled in a 1:1 ratio and fed into the siamese networks. The contrastive losses on the test set for different network configurations are shown in TAB1. The cardinalities for the ResNeXt configurations in TAB1 are 4.

The training and validation errors over epochs are plotted in FIG1. The observations from the training/validation plots are that: 1) feed-forward attention and average aggregation tend to overfit the data more than max and no aggregation, judging by the training errors; 2) feed-forward attention and average aggregation reach early stopping earlier than max and no aggregation, judging by the best validation epoch; 3) shallow architectures work slightly better than deeper ones if their numbers of parameters are on the same scale. Results showing qualitative characteristics of the embedding are shown in Figure 4. In Figure 4, the embeddings of 40 performances sung by 10 singers, each singing the same 4 songs from the test split, are plotted. The embedding of a performance is obtained by taking the mean of the embeddings of all the chopped input samples of that performance. A comparison is made between the embeddings from the shallow ResNeXt architecture with/without feed-forward attention and the handcrafted features used in the baseline case for singer classification. Both the embeddings and the extracted handcrafted audio features are projected down to a 2-D space by t-SNE BID13. It is obvious that the baseline handcrafted audio features captured the "song" effect, while the learned embeddings from our singing performance embedding experiment were able to group together the performances by the same singers while remaining invariant to the "song" effect. The t-SNE projections of the performed 6-second clips, before being summarized into songs, are shown in FIG4 in Appendix B. For another quantitative assessment of the embeddings, leave-one-out k-nearest neighbor classification is performed using the embedded 16-dimensional performance vectors as training points. For each k and each network configuration, every sample is used as the test sample once, and the classification accuracies are obtained by averaging over the outcomes of all test samples. For the k-nearest neighbor singer classification, all the 224 performances from 56 singers are used. The classification results for multiple values of k, among the shallow ResNeXt configurations with/without feed-forward attention and the handcrafted features, are shown in Figure 5. In addition, k-nearest neighbor classification on performed songs is also conducted to demonstrate the "song effect".
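A PyTorch sketch of the siamese training objective defined at the top of this section follows; e1 and e2 are the 16-dimensional embeddings of a pair, and the exact loss form is our reading of the elided equation in the source.

```python
import torch

def contrastive_loss(e1, e2, y, m=1.0):
    D = ((e1 - e2) ** 2).sum(dim=1)                       # squared Euclidean distance
    pull = y * D                                          # same identity: pull together
    push = (1 - y) * torch.clamp(m - torch.sqrt(D + 1e-12), min=0.0) ** 2
    return (pull + push).mean()                           # different identity: push apart
```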
From the k-nearest neighbor classification on singers and songs, it is evident that the "song" effect exists, and that singing performance embedding learning is able to dilute the "song" effect while extracting features that are more relevant for characterizing singers. Judging by the k-nearest neighbor classification accuracies, the feed-forward global aggregation also helped enhance "singer style" while slightly reducing the "song effect". It is worth mentioning that the k-nearest neighbor classification on performed songs is only possible due to the "balanced" nature of the dataset.

In this paper, empirical investigations into how recent developments in the deep learning community could help solve singer identification and embedding problems were conducted. From the experimental results, the obvious takeaway is that global aggregation over time improves performance by a considerable margin in general. The performances of the three aggregation strategies (max, average and feed-forward attention) are very close. The advantage of using feed-forward attention observed in the experiments is that it accelerates the learning process compared to the other, non-learnable global aggregations. One way to explain this observation is that the feed-forward attention layer learns a "frequency template" for each convolutional channel fed into it. These "frequency templates" are encoded in w and enable each convolutional channel fed into it to focus on different parts along the frequency axis (since w ∈ R^D with D = # of channels × # of frequency bins). In this paper we have also shown that training deep neural networks having more than 15 convolutional layers on time-frequency input is definitely feasible with the help of global time aggregation. To the authors' knowledge, there is no previous music information retrieval research utilizing neural networks having more than 10 convolutional layers. A dataset consisting of over 20000 solo singing voice recordings is also released and described in this paper. The released dataset, DAMP-balanced, can be partitioned in such a way that, for singer classification, the performed songs for each singer are the same. For future work, we will experiment with replacing max-pooling with striding in convolutional layers, as recent works on CNNs suggest. To improve global aggregation, taking temporal order into consideration during the global aggregation operation, as suggested in BID21, will also be explored. The proposed neural network configurations will also be evaluated on other music information retrieval tasks such as music structure segmentation.

The DAMP-balanced dataset is a separate dataset from the original DAMP, but comes with the same format of metadata as the original DAMP dataset. The audio recordings and metadata of the DAMP-balanced dataset are collected by querying the internal database of the Sing! Karaoke app hosted by Smule, Inc., the same as for the original DAMP dataset. The difference between the DAMP and DAMP-balanced datasets lies in how the querying is done to collect the audio recordings and the metadata. For the original DAMP, 10 singing performances from each of 3462 Sing! Karaoke app users were randomly selected. There were no specific constraints on the collections of songs performed by each user. As a result, each user sang a different collection of songs, and one song could be sung multiple times by one user.
On the contrary, the queries to retrieve audio recordings and metadata for the DAMP-balanced dataset specifically ask for a group of users who all sang one specific collection of songs at least once, with only one performance returned for each song and each user, per query. The 14 most popular songs over the past year (where popularity is defined by the number of times a song was sung) were selected and are listed in TAB2. For the first 10 songs, 210 × 2 queries were created to retrieve audio recordings and metadata that cover all different combinations of splitting the 10 songs into 6/4 song collections. Each query returns a set of users, along with their singing performances and metadata, such that all users in that returned set have only one performance of each of the songs in the specific 6- or 4-song collection. For example, the train/validation sets used in this paper took the first 6 songs in TAB2 as the training set and the following 4 songs as the validation set. This specific split has 276 performances for training and 88 performances for validation, corresponding to 46 and 22 singers respectively. Different 6/4 splits result in different numbers of singers in each set, thus making the total number of performances differ from split to split. For example, if instead the first 4 songs and the following 6 songs are taken as the split of the first 10 songs, the 4-song collection will have 459 users and 1836 performances, while the following 6-song collection has 3 users and 18 performances. The "balanced" structure of DAMP-balanced allows train/validation rotation within the first 10 songs while leaving the last 4 songs as the test set, and provides more possible "balanced" test sets for models trained on other datasets.
Using deep learning techniques on singing voice related tasks.
949
scitldr
The Softmax function is used in the final layer of nearly all existing sequence-to-sequence models for language generation. However, it is usually the slowest layer to compute, which limits the vocabulary size to a subset of most frequent types, and it has a large memory footprint. We propose a general technique for replacing the softmax layer with a continuous embedding layer. Our primary innovations are a novel probabilistic loss, and a training and inference procedure in which we generate a probability distribution over pre-trained word embeddings, instead of a multinomial distribution over the vocabulary obtained via softmax. We evaluate this new class of sequence-to-sequence models with continuous outputs on the task of neural machine translation. We show that our models obtain up to 2.5x speed-up in training time while performing on par with the state-of-the-art models in terms of translation quality. These models are capable of handling very large vocabularies without compromising on translation quality. They also produce more meaningful errors than the softmax-based models, as these errors typically lie in a subspace of the vector space of the reference translations.

Due to the power law distribution of word frequencies, rare words are extremely common in any language BID45. Yet, the majority of language generation tasks, including machine translation BID39 BID1 BID24, summarization BID36 BID37 BID30, dialogue generation BID40, question answering BID44, speech recognition BID13, and others, generate words by sampling from a multinomial distribution over a closed output vocabulary. This is done by computing scores for each candidate word and normalizing them to probabilities using a softmax layer. Since softmax is computationally expensive, current systems limit their output vocabulary to a few tens of thousands of most frequent words, sacrificing linguistic diversity by replacing the long tail of rare words with the unknown word token, unk. Unsurprisingly, at test time this leads to inferior performance when generating rare or out-of-vocabulary words. Despite the fixed output vocabulary, softmax is computationally the slowest layer. Moreover, its computation follows a large matrix multiplication to compute scores over the candidate words; this makes softmax expensive in terms of memory requirements and the number of parameters to learn BID26 BID27 BID8. Several alternatives have been proposed for alleviating these problems, including sampling-based approximations of the softmax function BID2 BID26, approaches proposing a hierarchical structure of the softmax layer BID27 BID7, and changing the vocabulary to frequent subword units, thereby reducing the vocabulary size BID38.

We propose a novel technique to generate low-dimensional continuous word representations, or word embeddings BID25 BID31 BID4, instead of a probability distribution over the vocabulary at each output step. We train sequence-to-sequence models with continuous outputs by minimizing the distance between the output vector and the pretrained word embedding of the reference word. At test time, the model generates a vector and then searches for its nearest neighbor in the target embedding space to generate the corresponding word. This general architecture can in principle be used for any language generation (or any recurrent regression) task.
In this work, we experiment with neural machine translation, implemented using recurrent sequence-to-sequence models BID39 with attention BID1 BID24. To the best of our knowledge, this is the first work that uses word embeddings, rather than the softmax layer, as outputs in language generation tasks. While this idea is simple and intuitive, in practice it does not yield competitive performance with standard regression losses like ℓ2. This is because the ℓ2 loss implicitly assumes a Gaussian distribution of the output space, which is likely false for embeddings. In order to correctly predict the outputs corresponding to new inputs, we must model the correct probability distribution of the target vector conditioned on the input BID3. A major contribution of this work is a new loss function based on defining such a probability distribution over the word embedding space and minimizing its negative log likelihood (§3).

We evaluate our proposed model with the new loss function on the task of machine translation, including on datasets with huge vocabulary sizes, in two language pairs, and in two data domains (§4). In §5 we show that our models can be trained up to 2.5x faster than softmax-based models while performing on par with state-of-the-art systems in terms of generation quality. Error analysis (§6) reveals that the models with continuous outputs are better at correctly generating rare words and make errors that are close to the reference texts in the embedding space and are often semantically related to the reference translation.

Traditionally, all sequence-to-sequence language generation models use one-hot representations for each word in the output vocabulary V. More formally, each word w is represented as a unique vector o(w) ∈ {0, 1}^V, where V is the size of the output vocabulary and only one entry id(w) (corresponding to the word ID of w in the vocabulary) in o(w) is 1 and the rest are set to 0. The models produce a distribution p_t over the output vocabulary at every step t using the softmax function:

p_t(w) = exp(s_w) / Σ_{w'∈V} exp(s_{w'}),

where s_w = W_w · h_t + b_w is the score of the word w given the hidden state h_t produced by the LSTM cell BID15 at time step t. W ∈ R^{V×H} and b ∈ R^V are trainable parameters, and H is the size of the hidden layer h. These parameters are trained by minimizing the negative log-likelihood (aka cross-entropy) of this distribution, treating o(w) as the target distribution. For the reference word w_t at step t, the loss function is defined as follows:

L(t) = −Σ_{w∈V} o(w_t)_w · log p_t(w) = −log p_t(w_t).

This loss computation involves a normalization proportional to the size of the output vocabulary V. This becomes a bottleneck in natural language generation tasks, where the vocabulary size is typically tens of thousands of words. We propose to address this bottleneck by representing words as continuous word vectors instead of one-hot representations, and by introducing a novel probabilistic loss to train these models, as described in §3. Here, we briefly summarize prior work that aimed at alleviating the softmax bottleneck problem, capitalizing on conceptually different approaches.

Sampling-Based Approximations: Sampling-based approaches completely do away with computing the normalization term of softmax by considering only a small subset of possible outputs. These include approximations like Importance Sampling BID2, Noise Contrastive Estimation BID26, Negative Sampling BID25, and Blackout BID16. These alternatives significantly speed up training time but degrade generation quality.
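A small NumPy sketch contrasting the two output layers follows; V, H and m mirror the sizes used later in the paper, and target_id is a hypothetical reference word index.

```python
import numpy as np

V, H, m = 50_000, 1024, 300
h_t = np.random.randn(H)                      # decoder hidden state
target_id = 42                                # hypothetical reference word id

# Softmax layer: a V x H multiply plus a normalization over the whole vocabulary.
W, b = 0.01 * np.random.randn(V, H), np.zeros(V)
s = W @ h_t + b
p = np.exp(s - s.max()); p /= p.sum()
loss = -np.log(p[target_id])                  # cross-entropy against one-hot o(w)

# Continuous output layer: only an m x H multiply, no normalization over V.
W_e = 0.01 * np.random.randn(m, H)
e_hat = W_e @ h_t                             # compared against e(w) by a distance/loss
```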
BID27 replace the flat softmax layer with a hierarchical layer in the form of a binary tree where words are at the leaves. This alleviates the problem of expensive normalization, but these gains are only obtained at training time. At test time, the hierarchical approximations lead to a drop in performance compared to softmax, both in time efficiency and in accuracy. BID7 propose to divide the vocabulary into clusters based on their frequencies. Each word is produced by a different part of the hidden layer, making the output embedding matrix much sparser. This leads to performance improvement both in training and decoding. However, it assigns fewer parameters to rare words, which leads to inferior performance in predicting them BID34.

BID0; BID11 add additional terms to the training loss which make the normalization factor close to 1, obviating the need to explicitly normalize. The evaluation of certain words can be done much faster than in softmax-based models, which is extremely useful for tasks like language modeling. However, for generation tasks, it is necessary to ensure that the normalization factor is exactly 1, which might not always be the case, and thus explicit normalization might still be required.

BID18 introduce character-based methods to reduce vocabulary size. While character-based models lead to a significant decrease in vocabulary size, they often differentiate poorly between similarly spelled words with different meanings. BID38 find a middle ground between characters and words based on sub-word units obtained using Byte Pair Encoding (BPE). Despite its limitations BID28, BPE achieves good performance while also making the model truly open vocabulary. BPE is the state-of-the-art approach currently used in machine translation; we thus use it as a baseline in our experiments.

In our proposed model, each word type in the output vocabulary is represented by a continuous vector e(w) ∈ R^m, where m ≪ V. This representation can be obtained by training a word embedding model on a large monolingual corpus BID25 BID31 BID4. At each generation step, the decoder of our model produces a continuous vector ê ∈ R^m. The output word is then predicted by searching for the nearest neighbor of ê in the embedding space:

ŵ = argmin_{w∈V} d(ê, e(w)),

where V is the output vocabulary and d is a distance function. In other words, the embedding space can be considered quantized into V components, and the generated continuous vector is mapped to a word based on the quantum in which it lies. The mapped word is then passed to the next step of the decoder BID14. While training this model, we know the target vector e(w), and we minimize its distance from the output vector ê. With this formulation, our model is directly trained to optimize towards the information encoded by the embeddings. For example, if the embeddings are primarily semantic, as in BID25 or BID4, the model would tend to output words in a semantic space; that is, produced words would either be correct or close synonyms (which we see in our analysis in §6). If we instead use syntactico-semantic embeddings BID22 BID23, we might also be able to control for syntactic forms.

We propose a novel probabilistic loss function, a probabilistic variant of the cosine loss, which gives a theoretically grounded regression loss for sequence generation and addresses the limitations of existing empirical losses (described in §4.2). Cosine loss measures the closeness between vector directions.
A natural choice for estimating directional distributions is the von Mises-Fisher (vMF) distribution, defined over a hypersphere of unit norm. That is, a vector close to the mean direction will have high probability. vMF is considered the directional equivalent of the Gaussian distribution. (A natural choice for many regression tasks would be a loss function based on the Gaussian distribution itself, which is a probabilistic version of the ℓ2 loss; but as we describe in §4.2, ℓ2 is not considered a suitable loss for regression on embedding spaces.) Given a target word w, its density function is given as follows:

p(e(w); µ, κ) = C_m(κ) · exp(κ · µᵀe(w)),

where µ and e(w) are vectors of dimension m with unit norm, and κ is a positive scalar, also called the concentration parameter. κ = 0 defines a uniform distribution over the hypersphere and κ = ∞ defines a point distribution at µ. C_m(κ) is the normalization term:

C_m(κ) = κ^{m/2−1} / ((2π)^{m/2} · I_{m/2−1}(κ)),

where I_v is the modified Bessel function of the first kind of order v. The output of the model at each step is a vector ê of dimension m. We use κ = ‖ê‖. Thus the density function becomes:

p(e(w); ê) = C_m(‖ê‖) · exp(êᵀe(w)).     (2)

It is noteworthy that equation 2 is very similar to the softmax computation (except that e(w) is a unit vector), the main difference being that the normalization is not done by summing over the vocabulary, which makes it much faster than the softmax computation. More details about its computation are given in the appendix. The negative log-likelihood of the vMF distribution at each output step is given by:

NLLvMF(ê; e(w)) = −log C_m(‖ê‖) − êᵀe(w).

Regularization of NLLvMF: In practice, we observe that the NLLvMF loss puts too much weight on increasing ‖ê‖, making the second term in the loss function decrease rapidly without a significant decrease in the cosine distance. To account for this, we add a regularization term. We experiment with two variants of regularization. NLLvMF reg1: We add λ₁‖ê‖ to the loss function, where λ₁ is a scalar hyperparameter (we empirically set λ₁ = 0.02 in all our experiments). This makes intuitive sense in that the length of the output vector should not increase too much. The regularized loss function is as follows:

NLLvMF_reg1(ê; e(w)) = −log C_m(‖ê‖) − êᵀe(w) + λ₁‖ê‖.

NLLvMF reg2: We modify the previous loss function as follows:

NLLvMF_reg2(ê; e(w)) = −log C_m(‖ê‖) − λ₂ · êᵀe(w).

−log C_m(‖ê‖) decreases slowly as ‖ê‖ increases, compared to the second term. A factor λ₂ < 1 on the second term (we use λ₂ = 0.1 in all our experiments) controls how fast it can decrease.

We modify the standard seq2seq models in OpenNMT (http://opennmt.net/) in PyTorch (https://pytorch.org/) BID19 to implement the architecture described in §3. This model has a bidirectional LSTM encoder with an attention-based decoder BID24. The encoder has one layer whereas the decoder has 2 layers of size 1024, with an input word embedding size of 512. For the baseline systems, the output at each decoder step multiplies a weight matrix (H × V) followed by softmax. This model is trained until convergence of the validation perplexity. For our proposed models, we replace the softmax layer with the continuous output layer (H × m), where the outputs are m-dimensional. We empirically choose m = 300 for all our experiments. Additional hyperparameter settings can be found in the appendix. These models are trained until convergence of the validation loss. Out-of-vocabulary words are mapped to an unk token (although the proposed model can make decoding open vocabulary, there can still be unknown words, e.g., words for which we do not have pre-trained embeddings; we need the unk token to represent these words). We assign unk an embedding equal to the average of the embeddings of all the words which are not present in the target vocabulary of the training set but are present in the vocabulary on which the word embeddings are trained.
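The NLLvMF loss above can be sketched with SciPy's exponentially scaled Bessel function for numerical stability (log I_v(κ) = log ive(v, κ) + κ). This is an evaluation-only sketch, since in practice the gradient of the Bessel term must be handled separately (see the appendix).

```python
import numpy as np
from scipy.special import ive   # exponentially scaled modified Bessel: I_v(k) * exp(-k)

def log_C_m(kappa, m):
    v = m / 2.0 - 1.0
    log_bessel = np.log(ive(v, kappa)) + kappa            # log I_{m/2-1}(kappa)
    return v * np.log(kappa) - (m / 2.0) * np.log(2.0 * np.pi) - log_bessel

def nll_vmf_reg1(e_hat, e_w, lam1=0.02):
    kappa = np.linalg.norm(e_hat)
    # -log C_m(||e_hat||) - e_hat . e(w) + lambda_1 * ||e_hat||
    return -log_C_m(kappa, e_hat.shape[0]) - e_hat @ e_w + lam1 * kappa
```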
Following BID10, after decoding, a post-processing step replaces the unk token using a dictionary look-up of the source word with the highest attention score. If the word does not exist in the dictionary, we back off to copying the source word itself. Bilingual dictionaries are automatically extracted from our parallel training corpus using word alignment BID12 (https://github.com/clab/fast_align). We evaluate all the models on the test data using the BLEU score BID29.

We evaluate our systems on standard machine translation datasets from IWSLT'16 BID6, with two target languages, English (German→English, French→English), and a morphologically richer target language, French (English→French). The training sets for each of the language pairs contain around 220,000 parallel sentences. We use TED Test 2013+2014 (2,300 sentence pairs) as development sets and TED Test 2015+2016 (2,200 sentence pairs) as test sets for all the language pairs. All mentioned setups have a total vocabulary size of around 55,000 in the target language, of which we choose the top 50,000 words by frequency as the target vocabulary (removing the bottom 5,000 words did not make a significant difference in terms of translation quality). We also experiment with the much larger WMT'16 German→English BID5 task, whose training set contains around 4.5M sentence pairs with a target vocabulary size of around 800,000. We use newstest2015 and newstest2016 as development and test data respectively. Since with continuous outputs we do not need to perform a time-consuming softmax computation, we can train the proposed model with a very large target vocabulary without any change in training time per batch. We perform this experiment on the WMT'16 de-en dataset with a target vocabulary size of 300,000 (basically all the words in the target vocabulary for which we had trained embeddings). But to be able to produce these words, the source vocabulary also needs to be increased to include their translations in the inputs, which would lead to a huge increase in the number of trainable parameters. Instead, we use sub-words computed using BPE as the source vocabulary. We use 100,000 merge operations to compute the source vocabulary, as we observe that using a smaller number leads to too-small (and less meaningful) sub-word units which are difficult to align with target words. These two datasets contain examples from vastly different domains: while IWSLT'16 contains less formal spoken language, WMT'16 contains data primarily from news.

We train target word embeddings for English and French on corpora constructed from the WMT'16 BID5 monolingual datasets, containing data from Europarl, News Commentary, News Crawl from 2007 to 2015, and News Discussion (everything except Common Crawl, due to its large memory requirements). These corpora consist of 4B+ tokens for English and 2B+ tokens for French. We experiment with two embedding models: word2vec BID25 and fasttext (Bojanowski et al.), which were trained using the hyper-parameters recommended by the authors.

We compare our proposed loss function with standard loss functions used in multivariate regression. Squared Error is the most common distance function used when the model outputs are continuous BID21. For each target word w, it is given as ℓ2 = ‖ê − e(w)‖²₂. It penalizes large errors more strongly and is therefore sensitive to outliers; to avoid this, we use a square-rooted version of the ℓ2 loss.
But it has been argued that there is a mismatch between the objective function used to learn word representations (maximum likelihood based on inner product), the distance measure for word vectors (cosine similarity), and ℓ2 distance as the objective function to learn transformations of word vectors BID41. This argument prompts us to look at the cosine loss:

L_cosine = 1 − êᵀe(w) / (‖ê‖ · ‖e(w)‖).

This loss minimizes the distance between the directions of the output and target vectors while disregarding their magnitudes. The target embedding space in this case becomes a set of points on a hypersphere of dimension m with unit radius. BID20 argue that using pairwise losses like ℓ2 or cosine distance for learning vectors in high-dimensional spaces leads to hubness: the word vectors of a subset of words appear as nearest neighbors of many points in the output vector space. To alleviate this, we experiment with a margin-based ranking loss (which has been shown to reduce hubness) to train the model to rank the word vector prediction ê for the target vector e(w) higher than any other word vector e(w') in the embedding space:

L_mm = Σ_{w'∈V, w'≠w} max{0, γ + cos(ê, e(w')) − cos(ê, e(w))},

where γ is a hyperparameter representing the margin (we use γ = 0.5 in our experiments) and w' denotes negative examples. We use only one informative negative example, as described in BID20, which is closest to ê and farthest from the target word vector e(w). But searching for this negative example requires iterating over the vocabulary, which brings back the problem of slow loss computation.

In the case of empirical losses, we output the word whose target embedding is the nearest neighbor to the output vector in terms of the distance (loss) defined. In the case of NLLvMF, we predict the word whose target embedding has the highest value of the vMF probability density with respect to the output vector. This predicted word is fed as the input for the next time step. Our nearest-neighbor decoding scheme is equivalent to greedy decoding; we thus compare to baseline models with a beam size of 1.

Until now we discussed the embeddings in the output layer. Additionally, the decoder in a sequence-to-sequence model has an input embedding matrix, as the previous output word is fed as an input to the decoder. Much of the trainable parameter count in all the models is occupied by these input embedding weights. We experiment with keeping this embedding layer fixed and tied with the pre-trained target output embeddings BID33. This leads to a significant reduction in the number of parameters in our model.

TAB8 shows the BLEU scores on the test sets for several baseline systems and various configurations, including the types of losses, the types of inputs/outputs used (word, BPE, or embedding; note that we do not experiment with subword embeddings, since the number of merge operations for BPE usually depends on the choice of language pair, which would require the embeddings to be retrained for every language pair), and whether the model used tied embeddings in the decoder or not. Since we represent each target word by its embedding, the quality of the embeddings should have an impact on the translation quality. We measure this by training our best model with fasttext embeddings BID4, which leads to a >1 BLEU improvement. Tied embeddings are the most effective setups: they not only achieve the highest translation quality, but also dramatically reduce parameter requirements and speed up convergence.
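A sketch of the max-margin loss above with the single informative negative example; here the negative is approximated as the non-target word whose embedding is most cosine-similar to ê, which is our reading of "closest to ê".

```python
import numpy as np

def max_margin_loss(e_hat, target_idx, E, gamma=0.5):
    # E: (V, m) matrix of target embeddings e(w)
    sims = (E @ e_hat) / (np.linalg.norm(E, axis=1) * np.linalg.norm(e_hat) + 1e-12)
    pos = sims[target_idx]                     # cos(e_hat, e(w)) for the target word
    sims = sims.copy(); sims[target_idx] = -np.inf
    neg = sims.max()                           # the single informative negative example
    return max(0.0, gamma + neg - pos)
```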
TAB3 shows the average training time per batch. In FIG1 (left), we show how many samples per second our proposed model can process at training time compared to the baseline. As we increase the batch size, the gap between the baseline and the proposed models increases. Our proposed models can process large mini-batches while still training much faster than the baseline models. The largest mini-batch size with which we can train our model is 512, compared to 184 in the baseline model. Using the max-margin loss leads to a slight increase in training time compared to NLLvMF. This is because its computation needs a negative example, which requires iterating over the entire vocabulary. Since our model requires a look-up of nearest neighbors in the target embedding table at test time, it currently takes a similar time as the softmax-based models. In future work, approximate nearest neighbor algorithms BID17 can be used to improve translation time.

We also compare the speed of convergence, using BLEU scores on dev data. In FIG1, we plot the BLEU scores against the number of epochs. Our model converges much faster than the baseline models, leading to an even larger improvement in overall training time (similar figures for more datasets can be found in the appendix). As a result, as shown in Table 3, the total training time of our proposed model (until convergence) is up to 2.5x less than the total training time of the baseline models.

Memory Requirements: As shown in TAB3, our best performing model requires less than 1% of the number of parameters in the input and output layers, compared to BPE-based baselines.

Table 3: Total convergence times in hours (h) / days (d).

              Softmax   BPE    Emb w/ NLLvMF
  fr-en       4h        4.5h   1.9h
  de-en       3h        3.5h   1.5h
  en-fr       1.8h      2.8h   1.3h
  WMT de-en   4.3d      4.5d   1.6d

Translation of Rare Words: We evaluate the translation accuracy of words in the test set based on their frequency in the training corpus. Table 5 shows how the F1 score varies with word frequency. The F1 score gives a balance between recall (the fraction of words in the reference that the predicted sentence produces right) and precision (the fraction of produced words that are in the reference). We show substantial improvements over the softmax and BPE baselines in translating less frequent and rare words, which we hypothesize is due to having learned good embeddings of such words from the monolingual target corpus, where these words are not as rare. Moreover, in BPE-based models, rare words on the source side are split into smaller units which in some cases are not properly translated into subword units on the target side if transparent alignments do not exist. For example, the word saboter in French is translated to sab+ot+tate by the BPE model, whereas it is correctly translated as sabotage by our model. Also, the rare word retraite in French is translated to pension by both the Softmax and BPE models (pension is a related word but less rare in the corpus) instead of the expected translation retirement, which our model gets right. We conducted a thorough analysis of outputs across our experimental setups. A few examples are shown in the appendix.
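A hypothetical sketch of the unigram F1 computation behind Table 5, using clipped unigram counts for the precision and recall defined above; bucketing by training-corpus frequency is left out for brevity.

```python
from collections import Counter

def unigram_f1(hyp_tokens, ref_tokens):
    hyp, ref = Counter(hyp_tokens), Counter(ref_tokens)
    overlap = sum((hyp & ref).values())             # clipped unigram matches
    p = overlap / max(sum(hyp.values()), 1)         # produced words found in reference
    r = overlap / max(sum(ref.values()), 1)         # reference words produced
    return 2 * p * r / (p + r) if p + r else 0.0
```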
Interestingly, there are many examples where our models do not exactly match the reference translations (so they do not benefit in terms of BLEU scores) but produce meaningful translations. This is likely because the model produces words near the target words (which are often synonyms) or paraphrases instead of the exact target word. Since we are predicting embeddings instead of actual words, the model sometimes tends to be weaker at following a good language model, and this leads to ungrammatical outputs in cases where the baseline model would perform well. Integrating a pre-trained language model within the decoding framework is one potential avenue for our future work. Another reason for this type of error could be our choice of target embeddings, which are not modeled to (explicitly) capture syntactic relationships. Using syntactically inspired embeddings BID22 BID23 might help reduce these errors. However, such fluency errors are not uncommon in softmax- and BPE-based models either.

Table 5: Test set unigram F1 scores of occurrence in the predicted sentences, based on the words' frequencies in the training corpus, for different models for fr-en.

This work makes several contributions. We introduce a novel framework of sequence-to-sequence learning for language generation using word embeddings as outputs. We propose new probabilistic loss functions based on the vMF distribution for learning in this framework. We then show that the proposed model, trained on the task of machine translation, leads to a reduction in trainable parameters, to faster convergence, and to a dramatic speed-up, up to 2.5x in training time, over standard benchmarks. TAB5 visualizes a comparison between different types of softmax approximations and our proposed method. The state-of-the-art softmax-based models are highly optimized after a few years of research in neural machine translation. The results that we report are comparable to or slightly lower than the strongest baselines, but these setups are only an initial investigation of translation with the continuous output layer. There are numerous possible directions to explore and improve the proposed setups. What are additional loss functions? How should beam search be set up? Should we use scheduled sampling? What types of embeddings should be used? How should we translate with the embedding output into morphologically-rich languages? Can low-resource neural machine translation benefit from translation with continuous outputs if large monolingual corpora are available to pre-train strong target-side embeddings? We will explore these questions in future work. Furthermore, the proposed architecture and the probabilistic loss (NLLvMF) have the potential to benefit other applications which have sequences as outputs, e.g. speech recognition. NLLvMF could be used as an objective function for problems which currently use cosine or ℓ2 distance, such as learning multilingual word embeddings. Since the outputs of our models are continuous (rather than class-based discrete symbols), these models can potentially simplify the training of generative adversarial networks for language generation.

The vMF density is p(e(w); µ, κ) = C_m(κ) · exp(κ · µᵀe(w)), where C_m(κ) is given as:

C_m(κ) = κ^{m/2−1} / ((2π)^{m/2} · I_{m/2−1}(κ)).

The normalization constant is not directly differentiable because the Bessel function cannot be written in closed form. The gradient of the first component of the loss, log C_m(‖ê‖), follows from

∂ log C_m(κ) / ∂κ = −I_{m/2}(κ) / I_{m/2−1}(κ).
Table 9: Translation quality experiments using beam search with BPE-based baseline models, with a beam size of 5. With our proposed models, in principle, it is possible to generate candidates for beam search by using K-Nearest Neighbors. But how to rank the partially generated sequences is not trivial (one could use the loss values themselves to rank, but initial experiments with this setting did not result in significant gains). In this work, we focus on enabling training with continuous outputs efficiently and accurately, giving us huge gains in training time. The question of decoding with beam search requires substantial investigation and we leave it for future work (a sketch of the candidate retrieval step is given after the examples below).

Source: Une éducation est critique, mais régler ce problème va nécessiter que chacun d'entre nous s'engage et soit un meilleur exemple pour les femmes et filles dans nos vies.
Reference: An education is critical, but tackling this problem is going to require each and every one of us to step up and be better role models for the women and girls in our own lives.
Predicted: Education is critical, but it's going to require that each of us will come in and if you do a better example for women and girls in our lives.
Predicted: Education is critical, but to to do this is going to require that each of us of to engage and or a better example of the women and girls in our lives.
Predicted: That's critical, but that's that it's going to require that each of us is going to take that the problem and they're going to if you're a better example for women and girls in our lives.
Predicted (MaxMargin): Education is critical, but that problem is going to require that every one of us is engaging and is a better example for women and girls in our lives.
Predicted (NLLvMF reg): Education is critical, but fixed this problem is going to require that all of us engage and be a better example for women and girls in our lives.
TAB8: Translation examples. Red and blue colors highlight translation errors; red are bad and blue are outputs that are good translations, but are considered as errors by the BLEU metric. Our systems tend to generate a lot of such "meaningful" errors.

Source: Pourquoi ne sommes nous pas de simples robots qui traitent toutes ces données, produisent ces résultats, sans faire l'expérience de ce film intérieur?
Reference: Why aren't we just robots who process all this input, produce all that output, without experiencing the inner movie at all?
Predicted (BPE): Why don't we have simple robots that are processing all of this data, produce these results, without doing the experience of that inner movie?
Predicted: Why are we not that we do that that are technologized and that that that's all these results, that they're actually doing these results, without do the experience of this film inside?
Predicted (Cosine): Why are we not simple robots that all that data and produce these data without the experience of this film inside?
Predicted (MaxMargin): Why aren't we just simple robots that have all this data, make these results, without making the experience of this inside movie?
Predicted (NLLvMF reg): Why are we not simple robots that treat all this data, produce these results, without having the experience of this inside film?
TAB8: Example of fluency errors in the baseline model. Red and blue colors highlight translation errors; red are bad and blue are outputs that are good translations, but are considered as errors by the BLEU metric.
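To make the K-Nearest-Neighbors candidate generation mentioned above concrete, a sketch of the retrieval half is below; the ranking of partial sequences, which the text identifies as the open problem, is not addressed, and the names and the cosine scorer are illustrative assumptions.

import numpy as np

def knn_candidates(e_hat, emb_table, k=5):
    # emb_table: [V, d] unit-normalized target embeddings; e_hat: predicted embedding
    scores = emb_table @ (e_hat / (np.linalg.norm(e_hat) + 1e-9))
    top = np.argpartition(-scores, k)[:k]
    return top[np.argsort(-scores[top])]  # k candidate word ids, best first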
Language generation using seq2seq models which produce word embeddings instead of a softmax-based distribution over the vocabulary at each step, enabling much faster training while maintaining generation quality
950
scitldr
We propose a framework to understand the unprecedented performance and robustness of deep neural networks using field theory. Correlations between the weights within the same layer can be described by symmetries in that layer, and networks generalize better if such symmetries are broken to reduce the redundancies of the weights. Using a two parameter field theory, we find that the network can break such symmetries itself towards the end of training in a process commonly known in physics as spontaneous symmetry breaking. This corresponds to a network generalizing itself without any user input layers to break the symmetry, but by communication with adjacent layers. In the layer decoupling limit applicable to residual networks, we show that the remnant symmetries that survive the non-linear layers are spontaneously broken based on empirical results. The Lagrangian for the non-linear and weight layers together has striking similarities with the one in the quantum field theory of a scalar. Using results from quantum field theory we show that our framework is able to explain many experimentally observed phenomena, such as training on random labels with zero error, the information bottleneck and the phase transition out of it, shattered gradients, and many more.

Deep neural networks have been used in image recognition tasks with great success. The first of its kind, AlexNet BID8, led to many other neural architectures being proposed to achieve state-of-the-art results in image processing at the time. Some of the notable architectures include VGG BID12, Inception BID14 and Residual networks (ResNet) BID3. Understanding the inner workings of deep neural networks remains a difficult task. It has been discovered that the training process stalls when it goes through an information bottleneck, until the learning rate is decreased to a suitable amount, and then the network undergoes a phase transition. Deep networks appear to be able to regularize themselves and are able to train on randomly labeled data BID18 with zero training error. The gradients in deep neural networks behave as white noise over the layers BID1. And there are many other unexplained phenomena. A recent work BID0 showed that the ensemble behavior and binomial path lengths BID15 of ResNets can be explained by just a Taylor series expansion to first order in the decoupling limit. They found that the series approximation generates a symmetry breaking layer that reduces the redundancy of weights, leading to a better generalization, even though the ResNet does not contain such symmetry breaking layers in its architecture. They suggest that ResNets are able to break the symmetry by the communication between the layers. Another recent work also employed the Taylor expansion to investigate ResNets BID6. In statistical terms, a quantum theory describes errors from the mean of random variables. We wish to study how errors propagate through each layer in the network, layer by layer. In the limit of a continuous sample space, the quantum theory becomes a quantum field theory. The effects of sampling error and labelling error can then be investigated. It is well known in physics that a scalar field can drive a phase transition. Using a scalar field theory we show that a phase transition must exist towards the end of training, based on empirical results. It is also responsible for the remarkable performance of deep networks compared to other classical models.
In Appendix D, we explain that quantum field theory is likely one of the simplest models that can describe a deep network layer by layer in the decoupling limit. Much of the literature on neural network design focuses on different neural architectures that break symmetry explicitly, rather than spontaneously. For instance, non-linear layers explicitly break the symmetry of affine transformations. There is little discussion on spontaneous symmetry breaking. In neural networks, the Goldstone theorem in field theory states that for every continuous symmetry that is spontaneously broken, there exists a weight with zero Hessian eigenvalue at the loss minimum. No such weights would appear if the symmetries were explicitly broken. It turns out that many seemingly different experimental results can be explained by the presence of these zero eigenvalue weights. In this work, we exploit the layer decoupling limit applicable to ResNets to approximate the loss functions with a power series in symmetry invariant quantities and illustrate that spontaneous symmetry breaking of affine symmetries is the necessary and sufficient condition for a deep network to attain its unprecedented power.

The organization of this paper is as follows. The background on deep neural networks and field theory is given in Section 2. Section 3 shows that remnant symmetries can exist in a neural network and that the weights can be approximated by a scalar field. Experimental results that confirm our theory are given in Section 4. We summarize more evidence from other experiments in Appendix A. A review of field theory is given in Appendix B. An explicit example of spontaneous symmetry breaking is shown in Appendix C.

In this section we introduce our framework using a field theory based on Lagrangian mechanics. A deep neural network consists of layers of neurons. Suppose that the first layer of neurons with weight matrix W 1 and bias b 1 takes input x 1 and outputs y 1 DISPLAYFORM0 where x = (x, 1) and W 1 = (W 1, b 1), where W 1 and b 1 are real valued. Now suppose that R 1 denotes a nonlinear operator corresponding to a sigmoid or ReLU layer located after the weight layer, so that DISPLAYFORM1 For a neural network with T repeating units, the output for layer t is DISPLAYFORM2 We now show the necessary and sufficient conditions for preserving symmetry. We explicitly include symmetry transformations in Equation and investigate the effects caused by a symmetry transformation of the input in subsequent layers. Suppose Q t ∈ G is a transformation matrix in some Lie group G for all t. Note that the Q t are not parameters to be estimated. We write y t = y t (Q t), where the dependence on Q t is obtained from a transformation on the input, x t (Q t) = Q t x t, and the weights, DISPLAYFORM0 If G is a symmetry group, then y t is covariant with x t, such that y t (Q t) = Q t y t. This requires two conditions to be satisfied. First, DISPLAYFORM1 t, where Q −1 t Q t = I and the existence of the inverse is trivial because G is a group and Q t ∈ G. The second is the commutativity between R t and Q t, such that R t Q t = Q t R t. For example, if g t ∈ Aff(D), the group of affine transformations, R t may not commute with g t. However, commutativity is satisfied when the transformation corresponds to the 2D rotation of feature maps. Including transformation matrices, the output at layer t is DISPLAYFORM2 Statistical learning requires the loss function to be minimized. It can be written in the form of a mutual information, training error, or the Kullback-Leibler divergence.
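As a concrete check of the covariance condition y t (Q t) = Q t y t, the sketch below uses a permutation matrix Q, which commutes with an elementwise ReLU (a simple stand-in for the 2D feature-map rotations mentioned above); the whole setup is our own illustration.

import numpy as np

rng = np.random.default_rng(0)
D = 5
relu = lambda v: np.maximum(0.0, v)

Q = np.eye(D)[rng.permutation(D)]       # permutation matrix: commutes with elementwise ReLU
W = rng.normal(size=(D, D))
x = rng.normal(size=D)

y = relu(W @ x)                          # y_t = R_t W_t x_t
y_q = relu((Q @ W @ Q.T) @ (Q @ x))      # transformed weights W(Q) = Q W Q^{-1} and input Q x
assert np.allclose(y_q, Q @ y)           # covariance: y_t(Q_t) = Q_t y_t

For a general affine Q the assertion fails, which is exactly the sense in which the nonlinearity reduces Aff(D) to a remnant subgroup.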
In this section we approximate the loss function in the continuum limit of samples and layers. Then we define the loss functional to transition into Lagrangian mechanics and field theory. Let z i = (X i, Y i) ∈ X be the i-th input sample in data set X, where X i and Y i are the features and the desired outputs, respectively, and i ∈ {1, . . ., N}. The loss function is DISPLAYFORM0 where W = (W 1, . . ., W T) and Q = (Q 1, . . ., Q T), Q t ∈ G where G is a Lie group, and T is the depth of the network. Taking the continuum limit, DISPLAYFORM1 where p(X, Y) is the joint distribution of X and Y. Using the first fundamental theorem of calculus and taking the continuous layers (t) limit, we write DISPLAYFORM2 where L x (t = 0) is the value of the loss before training. We let L x,t = dL x /dt be the loss rate per layer. The loss rate L x,t is bounded from below. Therefore DISPLAYFORM3 Minimizing the loss rate guarantees the minimization of the total loss. We require L x,t to be invariant under symmetry transformations. That is, if DISPLAYFORM4 However, if Q 1 (t) and Q 2 (t) do not belong to the same symmetry group, the above equality does not necessarily hold. Now we define the loss functional for a deep neural network DISPLAYFORM5 Having defined the loss functional, we can transition into Lagrangian dynamics to give a description of the feature map flow at each layer. Let the minimizer of the loss rate be DISPLAYFORM0 From now on, we combine z = (X, Y), as Y only appears in W * in this formalism; each Y determines a trajectory for the representation flow determined by Lagrangian mechanics. Now we define, for each i-th element of W(t) and a non-linear operator R(t) acting on W(t), a weight deviation such that the loss minimum is centered at the origin, DISPLAYFORM1 We now define the Lagrangian density, DISPLAYFORM2 and L = T − V, where T is the kinetic energy and V is the potential energy. We define the potential energy to be DISPLAYFORM3 The probability density p(z) and the loss rate L x,t are invariant under symmetry transformations. Therefore V is an invariant as well. DISPLAYFORM4 We now set up the conditions to obtain a series expansion of V around the minimum w i (Q t) = 0. First, since V is an invariant, each term in the series expansion must be an invariant, such that DISPLAYFORM5 Suppose Q t is in O(D), the orthogonal group, and that w T (Q t) = w T Q T and w(Q t) = Q t w. So w i w i is an invariant. Then f = w i w i is invariant for all Q t, where the Einstein summation convention was used DISPLAYFORM6 Now we perform a Taylor series expansion about the minimum w i = 0 of the potential, DISPLAYFORM7 where H i j = ∂ w i ∂ w j V is the Hessian matrix, and similarly for Λ ij mn. The overall constant C can be ignored without loss of generality. Because V is an even function in w i around the minimum, we must have DISPLAYFORM8 The O(D) symmetry enforces all weight Hessian eigenvalues to be H i i = m 2 /2 for some constant m 2. This can be seen in the O(2) case: with constants a, b, a ≠ b, and Q ∈ O(2) such that w 1 (Q) = w 2 and w 2 (Q) = w 1, DISPLAYFORM9, so the O(2) symmetry implies a = b. This can be generalized to the O(D) case. For the quartic term, the requirement that V be even around the minimum gives DISPLAYFORM10 Setting Λ ii ii = λ/4 for some constant λ and zero for any other elements, the potential is DISPLAYFORM0 where the numerical factors were added for convention. The power series is a good approximation in the decoupling limit, which may be applicable to Residual Networks.
1 For the kinetic term T, we expand in a power series of the derivatives, DISPLAYFORM1 where the coefficient for (∂ t w) 2 is fixed by the Hamiltonian kinetic energy 1 2 (∂ t w) 2. Higher order terms in (∂ t w) 2 are negligible in the decoupling limit. If the model is robust, then higher order terms in (∂ z w) 2 can be neglected as well. 2 The Lagrangian density is DISPLAYFORM2 where we have set w 2 = w i w i and absorbed c into z without loss of generality. This is precisely the Lagrangian for a scalar field in field theory. Standard results for a scalar field theory can be found in Appendix B. To account for the effect of the learning rate, we employ results from thermal field theory BID7 and we identify the temperature with the learning rate η, so that now DISPLAYFORM3 Spontaneous symmetry breaking describes a phase transition of a deep neural network. Consider the following scalar field potential invariant under O(D) transformations, DISPLAYFORM0 where m 2 (η) = −µ 2 + (1/4) λη 2, µ 2 > 0, and η is the learning rate. There exists a value η = η c such that m 2 = 0. In the first phase, η > η c, the loss minimum is at w * 0i = 0, where DISPLAYFORM1 When the learning rate η drops sufficiently low, the symmetry is spontaneously broken and the phase transition begins. The loss minimum bifurcates at η = η c into DISPLAYFORM2 This occurs when the Hessian eigenvalue becomes negative, m 2 (η) < 0, when η < η c. This phenomenon has profound implications. It is responsible for the phase transition in neural networks and generates long range correlation between representations and the desired output. Details from field theory can be found in Appendix C. FIG0 depicts the shape of the loss rate during spontaneous symmetry breaking with a single weight w, where the orthogonal group O(D) is reduced to a reflection symmetry O = {1, −1} such that w(Q) = ±w. At η > η c, the loss rate has a minimum at point A. When the learning rate decreases, such that η < η c, the critical point at A becomes unstable and new minima with equal loss rate are generated. The weight must go through B to get to the new minimum C. If the learning rate is too small, the weight will be stuck near A. This explains why a cyclical learning rate can outperform a monotonically decreasing learning rate BID13. Because the loss rate is still invariant under the spontaneously broken symmetry, any new minima generated from spontaneous symmetry breaking must have the same loss rate. If there is an unbroken continuous symmetry remaining, there would be a connected loss rate surface corresponding to the new minima generated by the unbroken symmetry. Spontaneous symmetry breaking splits the weights into two sets, w → (π, σ). The direction along this degenerate minima in weight space corresponds to π, and the direction in weight space orthogonal to π is σ. This has been shown experimentally by BID2 in FIG0. We show the case for the breaking of O to O in FIG2. 2 The kinetic term T is not invariant under transformation Q(t). To obtain invariance, ∂ t w i is to be replaced by the covariant derivative D t w i so that (D t w i) 2 is invariant under Q(t) BID10. The covariant derivative is DISPLAYFORM3 with B(z, t, Q t) = Q(t)B(z, t)Q(t) −1. The new fields B introduced for invariance are not responsible for spontaneous symmetry breaking, the focus of this paper, so we will not consider them further. 3 Formally, the ∂ z w term should be part of the potential V, as T contains only ∂ t w terms. However, we adhere to the field theory literature and put the ∂ z w term in T with a minus sign.
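A tiny numerical illustration of this bifurcation is sketched below, using the potential V(w) = (m²/2)w² + (λ/4)w⁴ with m²(η) = −µ² + (1/4)λη²; the parameter values are arbitrary assumptions of ours.

import numpy as np

mu2, lam = 1.0, 0.1
eta_c = np.sqrt(4.0 * mu2 / lam)            # m2(eta_c) = 0

def loss_minima(eta):
    m2 = -mu2 + 0.25 * lam * eta**2
    if m2 >= 0:                              # symmetric phase: a single minimum at the origin
        return np.array([0.0])
    w = np.sqrt(-m2 / lam)                   # broken phase: V'(w) = m2*w + lam*w**3 = 0
    return np.array([-w, w])                 # degenerate minima; the origin is now a local maximum

for eta in (1.5 * eta_c, 0.5 * eta_c):
    print(f"eta/eta_c = {eta / eta_c:.1f} -> minima at {loss_minima(eta)}")

Below η c the minimum at A becomes unstable and the weight must cross the barrier at B to reach C, which matches the cyclical-learning-rate argument above.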
In this section we show that spontaneous symmetry breaking occurs in neural networks. First, we show that learning by deep neural networks can be considered solely as breaking the symmetries in the weights. Then we show that some non-linear layers can preserve symmetries across the non-linear layers. Then we show that weight pairs in adjacent layers, but not within the same layer, are approximately an invariant under the remnant symmetry left over by the non-linearities. We assume that the weights are scalar fields invariant under the affine Aff(D) group for some D, and find that experimental results show that deep neural networks undergo spontaneous symmetry breaking.

Theorem 1: Deep feedforward networks learn by breaking symmetries. Proof: Let A i be an operator representing any sequence of layers, and let a network be formed by applying A i repeatedly, such that DISPLAYFORM0. If every A i is affine, then x out = Lx in for some L ∈ Aff(D), and x out can be computed by a single affine transformation L. When A i contains a nonlinearity for some i, this symmetry is explicitly broken by the nonlinearity and the layers learn a more generalized representation of the input.

Now we show that ReLU preserves some continuous symmetries. Theorem 2: ReLU reduces the symmetry of an Aff(D) invariant to some subgroup Aff(D′), where D′ < D. Proof: Suppose R denotes the ReLU operator with output y t, and Q t ∈ Aff(D) acts on the input x t, where R(x) = max(0, x). Let x T x be an invariant under Aff(D) and let DISPLAYFORM1 Note that γ i can be transformed into a negative value, as it has passed the ReLU already. Corollary: If there exists a group G that commutes with a nonlinear operator R, such that QR = RQ for all Q ∈ G, then R preserves the symmetry G. Definition (Remnant Symmetry): If Q t ∈ G commutes with a non-linear operator R t for all Q t, then G is a remnant symmetry at layer t.

For the loss function L i (X i, Y i, W, Q) to be invariant, we need the predicted output y T to be covariant with x i. Similarly, for an invariant loss rate L x,t we require y t to be covariant with x t. The following theorem shows that a pair of weights in adjacent layers can be considered an invariant for power series expansion. Theorem 3: Neural network weights in adjacent layers form an approximate invariant. Suppose a neural network consists of affine layers followed by a continuous non-linearity R t, that the weights at layer t satisfy W t (Q t) = Q t W t Q t −1, and that Q t ∈ H is a remnant symmetry such that Q t R t = R t Q t. Then w t w t−1 can be considered as an invariant for the loss rate. Proof: Consider x(Q t) = Q t x t; then DISPLAYFORM2 where in the last line Q t R t = R t Q t was used, so y t (Q t) = Q t y t is covariant with x t. Now, x t = R t−1 W t−1 x t−1, so that DISPLAYFORM3 The pair (R t W t)(R t−1 W t−1) can be considered an invariant under the remnant symmetry at layer t. Let w t = R t W t − R t W * t. Therefore w t w t−1 is an invariant. In the continuous layer limit, w t w t−1 tends to w(t) T w(t), where w(t) is the first layer and w(t) T corresponds to the one after. Therefore w(t) can be considered as D scalar fields under the remnant symmetry. The remnant symmetry is not exact in general; for sigmoid functions it is only an approximation. The crucial feature of the remnant symmetry is that it is continuous, so that strong correlation between inputs and outputs can be generated from spontaneous symmetry breaking. In the following we will only consider exact remnant symmetries.
We will state the Goldstone Theorem from field theory without proof. Theorem (Goldstone): For every spontaneously broken continuous symmetry, there exists a weight π with zero eigenvalue in the Hessian, m 2 π = 0. In any case, we will adhere to the case where the remnant symmetry is an orthogonal group O(D′). Note that W is a D × D matrix and D′ < D. We choose a subset Γ ∈ R D′ of W such that Γ T Γ is invariant under the remnant symmetry. Now that we have an invariant, we can write down the Lagrangian for a deep feedforward network for the weights responsible for spontaneous symmetry breaking. Now we can use standard field theory and apply it to deep neural networks. A review of field theory is given in Appendix B. The formalism for spontaneous symmetry breaking is given in Appendix C. In this section we assume that the non-linear operator is a piecewise linear function such as ReLU; we set R = I to be the identity and restrict our attention to the symmetry preserving part of R (see Theorem 2). Our discussion also applies to other piecewise-linear activation functions. According to the Goldstone theorem, spontaneous symmetry breaking splits the set of weight deviations γ into two sets (σ, π) with different behaviors: weights π with zero eigenvalues and a spectrum dominated by small frequencies k in their correlation function, 4 and the other weights σ, which have Hessian eigenvalues µ 2 as the weights before the symmetry is broken. In Appendix C, a standard calculation in field theory shows that the correlation functions of the weights have the form DISPLAYFORM0

Spontaneous symmetry breaking and the information bottleneck. The neural network undergoes a phase transition out of the information bottleneck via spontaneous symmetry breaking, described in Section 2.5. Before the phase transition, the weights γ have positive Hessian eigenvalues m 2. After the phase transition, weights π with zero Hessian eigenvalues are generated by spontaneous symmetry breaking. The correlation function for the π weights is concentrated around small values of |k| (see Equation FORMULA36, with ω 0 = |k| for any t). This corresponds to highly correlated representations across the sample (input) space and layers. Because the loss is minimized, the feature maps across the network are highly correlated with the desired output. And a large correlation across the sample space means that the representations are independent of the input. This is shown in FIG2 of BID11. After the phase transition, I(Y ; T) ≈ 1 bit for all layers T, and I(X; T) is small even for representations in early layers.

Gradient variance explosion. It has been shown that the variance in weight gradients in the same layer grows by an order of magnitude during the end of training BID11. We also connect this to spontaneous symmetry breaking. As two sets of weights (σ, π) are generated with different distributions, considering them as the same object would result in a larger variance. We find that neural networks are resilient to overfitting. Recall that the fluctuation in the weights can arise from sampling noise. Then (∂ z w i) 2 can be a measure of model robustness: a small value denotes the weights' resistance to sampling noise. If the network were to overfit, the weights would be very sensitive to sampling error. After spontaneous symmetry breaking, weights at the loss minimum with zero eigenvalues obey the Klein-Gordon equation with m 2 π = 0 DISPLAYFORM0 The singularity in the correlation function suggests |k| 2 ≈ 0. The zero eigenvalue weights provide robustness to the model.
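The zero-eigenvalue modes can be checked directly on the quartic potential: at a broken minimum of V(w) = (m²/2)|w|² + (λ/4)|w|⁴ with m² = −µ² < 0, the Hessian has exactly one massive direction and D − 1 Goldstone zero modes. A minimal numerical sketch, with arbitrary parameter values of our choosing:

import numpy as np

D, mu2, lam = 4, 1.0, 0.5

w = np.zeros(D)
w[0] = np.sqrt(mu2 / lam)                  # a point on the degenerate minimum |w|^2 = mu2 / lam

# Hessian of V: H_ij = m2*delta_ij + lam*(|w|^2 delta_ij + 2 w_i w_j), with m2 = -mu2
H = (-mu2 + lam * (w @ w)) * np.eye(D) + 2.0 * lam * np.outer(w, w)
print(np.round(np.linalg.eigvalsh(H), 6))  # -> D-1 zeros (the pi modes) and one 2*mu2 (the sigma mode)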
BID18 referred to this phenomenon as implicit regularization. In this work we solved one of the most puzzling mysteries of deep learning by showing that deep neural networks undergo spontaneous symmetry breaking. This is a first attempt to describe a neural network with a scalar quantum field theory. We have shed light on many unexplained phenomena observed in experiments, summarized in Appendix A. One may wonder why our theoretical model works so well in explaining the experimental results with just two parameters. It is due to the decoupling limit, in which a power series in the loss function is a good approximation to the network. In our case, the two expansion coefficients are the smallest number of parameters able to describe the phase transition observed near the end of training, where the performance of the deep network improves drastically. It is no coincidence that our model can explain the empirical observations after the phase transition. In fact, our model can describe, at least qualitatively, the behaviors of the phase transition in networks to which the decoupling limit may not apply. This suggests that the interactions with nearby layers are responsible for the phase transition.

In this section we summarize other experimental findings that can be explained by the proposed field theory and the perspective of symmetry breaking. Here Q ∈ G acts on the input and hidden variables x, h as Qx, Qh.
• The shape of the loss function after spontaneous symmetry breaking is the same as that observed by BID2 towards the end of training, see FIG0.
• The training error typically drops drastically when the learning rate is decreased. This occurs when the learning rate drops below η c, forcing a phase transition so that new minima develop. See FIG0.
• A cyclical learning rate BID13 helps to get to the new minimum faster, see Section 2.5.
• Stochasticity in gradient descent juggles the loss function such that the weights are no longer at the local maximum of FIG0. A gradient descent step is then taken to move the weights further towards the local minimum. Stochasticity helps the network to generalize better.
• When the learning rate is too small to move away from A in FIG0, PReLUs BID4 could move the weight away from A through the training of the non-linearity. This corresponds to breaking the symmetry explicitly, as in Theorem 1.
• Results from are due to spontaneous symmetry breaking, see Section 4.
• Deep neural networks can train on random labels with low training loss, as feature maps are highly correlated with their respective desired outputs. BID18 observed that a deep neural network can achieve zero training error on random labels. This shows that small Hessian eigenvalues are not the only condition that determines robustness.
• Identity mapping outperforming other skip connections BID5 is a result of the residual unit's output being small. Then the residual units can be decoupled, leading to a small λ, and so it is easier for spontaneous symmetry breaking to occur, from m 2 = −µ 2 + (1/4) λη 2.
• A skip connection across residual units breaks additional symmetry. Suppose now an identity skip connection connects x 1 and the output of F 2. Now perform a symmetry transformation on x 1 and x 2, Q 1 and Q 2 ∈ G, respectively. Then the output after two residual units is Qx 3 = Q 1 x 1 + Q 2 x 2 + Q 2 F 2. Neither Q = Q 1 nor Q = Q 2 can satisfy the covariance under G. This is observed by BID9.
• The shattered gradient problem BID1: it is observed that the gradient in deep (non-residual) networks is very close to white noise.
This is reflected in the exponential in Equation FORMULA56. This effect on ResNets is reduced because of the decoupling limit λ → 0. This leads to the weight eigenvalues m 2 being larger in non-residual networks, owing to m 2 = −µ 2 + (1/4) λη 2, and so to a higher oscillation frequency in the correlation function.
• In recurrent neural networks, multiplicative gating BID16 combines the input x and the hidden state h by an element-wise product. Their method outperforms the method with an addition x + h because the multiplicative gating breaks the covariance of the output: under a transformation, Qx ∗ Qh ≠ Q(x ∗ h), whereas for addition the output remains covariant, Qx + Qh = Q(x + h).

In this section we state the relevant results in field theory without proof. We use Lagrangian mechanics for fields w(x, t). Equations of motion for fields are the solutions of the Euler-Lagrange equation, which is a result of the principle of least action. The action, S, is DISPLAYFORM0 where L is the Lagrangian. Define the Lagrangian density DISPLAYFORM1 The action in terms of the Lagrangian density is DISPLAYFORM2 The Lagrangian can be written as a kinetic term T and a potential term V (loss function), DISPLAYFORM3 For a real scalar field w(x, t), DISPLAYFORM4 where we have set the constant c 2 = 1 without loss of generality. The potential for a scalar field that allows spontaneous symmetry breaking has the form DISPLAYFORM5 In the decoupling limit, λ → 0, the equation of motion for w is the Klein-Gordon Equation DISPLAYFORM6 In the limit m 2 → 0, the Klein-Gordon Equation reduces to the wave equation with solution w(z, t) = e^{i(ωt − k·z)}, DISPLAYFORM7 One can treat w as a random variable such that the probability distribution (a functional) of the scalar field w(z, t) is p[w] = exp(−S[w])/Z, where Z is some normalizing factor. The distribution peaks at the solution of the Klein-Gordon equation, since it minimizes the action S. Now we can define the correlation function between w(z 1, t 1) and w(z 2, t 2), DISPLAYFORM8 where Dw denotes the integral over all paths from (z 1, t 1) to (z 2, t 2). In the decoupling limit λ → 0, it can be shown that DISPLAYFORM9 where Stokes' theorem was used and the term on the boundary of (sample) space is set to zero. The above integral in the exponent is quadratic in w, and the integral over Dw can be done in a similar manner to Gaussian integrals. The correlation function of the fields across two points in space and time is ⟨w(z 1, t 1)w(z 2, t 2)⟩ = G(z 1, t 1, z 2, t 2), where G(z 1, t 1, z 2, t 2) is the Green's function of the Klein-Gordon equation, satisfying DISPLAYFORM10 The Fourier transformation of the correlation function is DISPLAYFORM11 An inverse transform over ω gives DISPLAYFORM12 with ω DISPLAYFORM13

In this section we show that weights π with small, near zero, eigenvalues m 2 π = (1/4) λη 2 are generated by spontaneous symmetry breaking. Note that we can write the Lagrangian in Equation as L = T − V. Consider weights γ that transform under O(D); from Equation DISPLAYFORM0 When m 2 = −µ 2 + (1/4) λη 2 < 0, it can be shown that the loss minimum is no longer at γ i = 0, but is instead a set of degenerate minima on the surface such that Σ i (γ i) 2 = v, where v = −m 2 /λ. Now we pick a point on these loss minima and expand around it. Write γ i = (π k, v + σ), where k ∈ {1, . . ., D − 1}. Intuitively, the π k fields are in the subspace of degenerate minima and σ is the field orthogonal to π.
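As a quick sanity check of the plane-wave solution and the dispersion relation used below (ω 0² = |k|² + m², in the convention (∂ t² − ∂ z² + m²)w = 0), the following symbolic snippet verifies it; the sign convention is an assumption on our part, since the elided equations do not fix it.

import sympy as sp

t, z = sp.symbols('t z', real=True)
omega, k, m = sp.symbols('omega k m', positive=True)
w = sp.exp(sp.I * (omega * t - k * z))                # w(z, t) = e^{i(omega t - k z)}

kg = sp.diff(w, t, 2) - sp.diff(w, z, 2) + m**2 * w   # (box + m^2) w with box = d_t^2 - d_z^2
print(sp.simplify(kg.subs(omega, sp.sqrt(k**2 + m**2))))  # -> 0 under the dispersion relation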
Then it can be shown that the Lagrangian can be written as DISPLAYFORM1 where, in the weak coupling limit λ → 0, DISPLAYFORM2 V π = O(λ), DISPLAYFORM3 the fields π and σ decouple from each other and can be treated separately. The σ fields satisfy the Klein-Gordon Equation (□ − m 2)σ = 0, with □ = ∂ 2 t − ∂ 2 z. The π fields satisfy the wave equation, □π = 0. The correlation functions of the weights across sample space and layers, P σ = ⟨σ(z, t)σ(z′, t′)⟩ and P π = ⟨π(z, t)π(z′, t′)⟩, are the Green's functions of the respective equations of motion. Fourier transforming the correlation functions gives P σ,π (t, k) = (i/2ω 0) exp(−iω 0 t), where ω 0 = √(|k| 2 + |m 2 σ,π |) and m 2 σ,π are the respective Hessian eigenvalues. Even though we formulated our field theory based on the decoupling limit of ResNets, the result of infinite correlation is very general and can be applied even if the decoupling limit is not valid. It is a direct result of spontaneous symmetry breaking. We state the Goldstone Theorem without proof. Theorem (Goldstone): For every continuous symmetry that is spontaneously broken, a weight π with zero Hessian eigenvalue is generated at zero temperature (learning rate η). In brief, the formalism for spontaneous symmetry breaking is mostly done in quantum field theory. In terms of statistics, quantum mechanics is the study of errors. We also believe that it is a good approximation to deep neural networks in the presence of the non-linear operators. The non-linear operators quantize the input: letting R denote the operator corresponding to a sigmoid, say, the output is R(W) ∈ {0, +1} for the most part, and the negative end of ReLU is zero. Let us take a step back and go through the logical steps to understand that a scalar quantum field theory is perhaps one of the simplest models one can consider to describe a neural network layer by layer, in the decoupling limit. We wish to formulate a dynamical model to describe the weights layer by layer,
Closed form results for deep learning in the layer decoupling limit applicable to Residual Networks
951
scitldr
During the last years, a remarkable breakthrough has been made in the AI domain thanks to artificial deep neural networks that achieved great success in many machine learning tasks in computer vision, natural language processing, speech recognition, malware detection and so on. However, they are highly vulnerable to easily crafted adversarial examples. Many investigations have pointed out this fact and different approaches have been proposed to generate attacks while adding a limited perturbation to the original data. The most robust known method so far is the so-called C&W attack. Nonetheless, a countermeasure known as feature squeezing coupled with ensemble defense showed that most of these attacks can be destroyed. In this paper, we present a new method we call Centered Initial Attack (CIA) whose advantage is twofold: first, it ensures by construction the maximum perturbation to be smaller than a threshold fixed beforehand, without the clipping process that degrades the quality of attacks. Second, it is robust against recently introduced defenses such as feature squeezing, JPEG encoding and even against a voting ensemble of defenses. While its application is not limited to images, we illustrate this using five of the current best classifiers on the ImageNet dataset, among which two are adversarially retrained on purpose to be robust against attacks. With a fixed maximum perturbation of only 1.5% on any pixel, around 80% of attacks (targeted) fool the voting ensemble defense, and nearly 100% when the perturbation is only 6%. While this shows how difficult it is to defend against CIA attacks, the last section of the paper gives some guidelines to limit their impact.

Since the skyrocketing of data volumes and parallel computation capacities with GPUs during the last years, deep neural networks (DNN) have become the most effective approaches for solving many machine learning problems in several domains like computer vision, speech recognition, game playing etc. They are even intended to be used in critical systems like autonomous vehicles BID17, BID18. However, DNNs, as they are currently built and trained using gradient-based methods, are very vulnerable to attacks, a.k.a. adversarial examples BID1. These examples aim to fool a classifier to make it predict the class of an input as another one, different from the real class, after bringing only a very limited perturbation to this input. This can obviously be very dangerous when it comes to systems where human life is at stake, like in self-driving vehicles. Companies' IT networks and plants are also vulnerable if DNN-based intrusion detection systems were to be deployed BID20. Many approaches have been proposed to craft adversarial examples since the publication by Szegedy et al. of the first paper pointing out the DNN vulnerability issue BID4. In their work, they generated adversarial examples using box-constrained L-BFGS. Later, in BID1, a fast gradient sign method (FGSM) that uses gradients of a loss function to determine in which direction the pixels' intensity should be changed is presented. It is designed to be fast, not to optimize the loss function. Kurakin et al. introduced in BID13 a straightforward improvement of this method where, instead of taking a single step of size ε in the direction of the gradient sign, multiple smaller steps are taken, and the result is clipped in the end. Papernot et al. introduced in BID3 an attack, optimized under the L0 distance, known as the Jacobian-based Saliency Map Attack (JSMA).
Another simple attack, known as Deepfool, is provided in. It is an untargeted attack technique optimized for the L2 distance metric. It is efficient and produces closer adversarial examples than the L-BFGS approach discussed earlier. Evolutionary algorithms are also used by the authors in BID14 to find adversarial examples while maintaining the attack close to the initial data. More recently, Carlini and Wagner introduced in BID0 the most robust attack known to date, as pointed out in BID5. They consider different optimization functions and several metrics for the maximum perturbation. Their L2 attack defeated the most powerful defense, known as distillation BID7. However, the authors in BID6 showed that feature squeezing managed to destroy most of the C&W attacks. Many other defenses have been published, like adversarial training BID3, gradient masking BID9, defenses based on uncertainty using dropout BID10 as done with Bayesian networks, based on statistics BID11, BID12, or on principal components BID22, BID23. Later, while we were carrying out our investigation, paper BID16 showed that no fewer than ten defense approaches, among which are the previously enumerated defenses, can be defeated by C&W attacks. It also pointed out that feature squeezing can also be defeated, but no thorough investigation was actually presented. Another possible, but not yet investigated, defense is based on JPEG encoding when dealing with images. It has never been explicitly attacked, even after it was shown in BID13 that most attacks are countered by this defense. Also, to our knowledge, no investigation has been conducted when dealing with ensemble defenses. Actually, attack transferability between models, which is well investigated and demonstrated in BID19 in the presence of an oracle (requesting the defense to get labels back to train a substitute model), is not guaranteed at all when the oracle is absent. Finally, when the maximum perturbation added to the original data is strictly limited, clipping is needed at the end of training (adversarial crafting) even if C&W attacks are used. The quality of crafted attacks is therefore degraded, as the perturbation brought during the training is brutally clipped. We tackle all these points in our work while introducing a new attack we call Centered Initial Attack (CIA). This approach considers the perturbation limits by construction, and consequently no alteration is done on the CIA resulting examples. To make it clearer for the reader, an example is given below to illustrate the clipping issue. FIG0 shows a comparison between CIA and the C&W L2 attack before and after clipping on an example, a guitar targeted as a potpie with max perturbation equal to 4.0 (around 1.5%). The same number of iterations FORMULA4 is considered for both methods. As can be seen in FIG0, CIA generates the best attack, with 96% confidence. C&W is almost as good with a score of 95%, but it is degraded to 88% after applying the clipping to respect the imposed max perturbation. Avoiding this degradation due to clipping is the core motivation of our investigation. The remainder of this paper is organized as follows. Section I presents some mathematical formulations and the principle of the CIA strategy. Then Section II investigates the application of CIA against ensemble defense, feature squeezing and JPEG encoding defenses. Then Section III provides some guidelines to limit the impact of CIA attacks. Finally, we give some possible future investigations in the conclusion. Before entering into the details of CIA, let us give some useful formulations.
A neural network can be seen as a function F (x) = y that accepts an input x and produces an output y. The function F actually depends on some model parameters, often called weights and biases. These are the variables that are adjusted during the learning process to fit the training data on one hand and generalize well to unseen data on the other hand. Since they do not change in our models, we omit them in our notations. The input x can be a vector or an array of any dimension. So, without loss of generality, we consider x ∈ R n, as it can be flattened in any case. The i-th component of x is noted x i, with integer i ∈ [1, n]. Since we consider m-class classifiers, the output is calculated using the softmax function. The output y = F (x) can be seen as a vector of m probabilities p j with j ∈ [1, m]. The component with the biggest value gives the predicted class C(x). This can be written as: DISPLAYFORM0 We note the output corresponding to the correct class as C c (x). An adversarial example x̃ is crafted as a non-targeted attack so as to get C(x̃) ≠ C c (x), or as a targeted one to get C(x̃) = t, where t is the target class. Crafting an example can then be formulated using a loss function L(F, x) to maximize the probability of getting a class different from the correct one. Cross entropy is used in our work. The adversarial example x̃ can be written as x̃ = x + δ, where δ is the added perturbation. In our work we constrain δ to be within the domain [−∆, ∆], as considered for instance in the Google Brain-Kaggle competition, ∆ being the maximum perturbation. With the existing approaches, adversarial examples are generated through some iterations and then clipped in the end to respect this constraint. With C&W attacks for instance, the loss function includes a norm term ‖δ‖ to minimize the perturbation δ, but the clipping is still needed, as we saw above. The main idea behind the Centered Initial Attack is to find for each component i the center x * i of the domain in which x̃ i is allowed to be (green segment on Figure 2). This is not trivial, since we have to ensure at the same time that the component x̃ i is within another domain [α i, β i] to be valid, α i and β i being respectively the minimum and maximum values that can be taken by the i-th component. To find the center of the domain of definition of x̃, three cases are to be considered actually, not four, since ∆ i is much smaller than (β i − α i), as can be seen on FIG2. As a reminder, ∆ i is the i-th component of ∆. The three cases are: DISPLAYFORM1 DISPLAYFORM2 Now, if we consider a continuous differentiable function g such that g: R → [−1, +1], then we can write every component x̃ i as: DISPLAYFORM3 This equation can be rewritten using arrays as: DISPLAYFORM4 where the operator ⊙ is the elementwise product and r is a new variable on which we optimize the loss function. Finally, the loss function can be written as: DISPLAYFORM5 No constraint is to be considered on the variable r, since it is well defined in the domain (−∞, +∞). Any initialization of r is possible, but we consider zero for simplicity. So, the initial attack x̃ is therefore different from x whereas it is centered in its domain of definition (green segment). This explains the CIA attack name. Regarding g, many continuous functions can be used. For instance, we tried three functions: DISPLAYFORM6 Obviously other functions can be considered. In our experiments, as they all lead to similar results, we always considered tanh. It is interesting to note that with CIA, we can define a different maximum perturbation from a component x i to another x j.
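Since the three case equations above are elided (DISPLAYFORM1 through DISPLAYFORM4), the sketch below reconstructs what they describe, namely intersecting [x − ∆, x + ∆] with the validity range [α, β], taking the center and half-width of the result, and parametrizing the attack through tanh; this is our own reading of the text, not the paper's verbatim formulas.

import numpy as np

def cia_center_halfwidth(x, delta, alpha=0.0, beta=255.0):
    # Intersect the perturbation box [x - delta, x + delta] with the valid range [alpha, beta];
    # the three cases in the text correspond to which bound, if any, gets truncated.
    lo = np.maximum(x - delta, alpha)
    hi = np.minimum(x + delta, beta)
    return (lo + hi) / 2.0, (hi - lo) / 2.0   # center x*, half-width

def cia_example(x, r, delta, alpha=0.0, beta=255.0):
    c, s = cia_center_halfwidth(x, delta, alpha, beta)
    return c + s * np.tanh(r)                 # always valid and within [x - delta, x + delta]

At r = 0 this gives x̃ = x*, the centered initial attack; no clipping is ever needed, because tanh keeps every component strictly inside its interval.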
Likewise, it is easy, for instance, to limit the crafting to only a portion of an image by considering a zero max perturbation on the other regions, without changing anything in the training algorithm. This is an advantage with regard to existing approaches, as it is difficult with the current machine learning frameworks to select from the same array only some variables to optimize on. Gradient masking would be a solution, but it is not desired as it is a clipping operation. An example of such partial crafting is displayed in Figure 3, where only a 50px band on the top and right sides is modified. We generated it using ∆ = 32 to make the difference visible on the paper, but the image (spider) is also classified as the target (dog) even with smaller values. Also, any gradient descent optimizer can be used to craft attacks, as is the case with BID0. Adam BID15 turns out to be the fastest in our experiments. We used it for training and considered 20 iterations, a good compromise between computation time and attack crafting convergence, for all the adversarial example crafting. To reproduce the results, the Adam hyperparameters to be considered are {learning rate = 0.2, DISPLAYFORM0 Finally, it is worth noting that CIA is not limited to images. It can be used for any type of data with bounded continuous features.

In order to check the effectiveness of CIA attacks, we consider mainly targeted attacks, as they are more difficult to craft, against three different strategies of defense: ensemble defense with many classifiers, feature squeezing, and JPEG encoding. A combination of these defenses is also considered, as we will see later. In this paper, we consider only white box attacks where we have full access to the defense models' parameters. Other works BID19 pointed out the transferability property between models when it is possible to query the defense classifier and get the labels back to train a substitute model to be used for crafting attacks. When this is the case, an attack generated using the substitute is likely to remain an attack on the defense. When it is not the case, however, this transferability is nonexistent, as we will see below. So, attacking as many models as possible at once is required. In order to check the robustness of CIA in attacking many classifiers at once, we consider the five best classifiers on the ImageNet dataset: Inception V3 (IncV3 a), Inception V4 (IncV4), InceptionResnet V2 (IncRes a), adversarially trained Inception V3 (IncV3 b) and adversarially trained InceptionResnet V2 (IncRes b). The accuracy of these classifiers is around 80% on the whole ImageNet dataset. The accuracies on the 1000-image dataset we consider are shown in TAB0. In this experiment, we attack the IncV3 b classifier and present the success rate of targeted attacks and the misclassification rate of each classifier. The results are shown in TAB1. As can be seen in TAB1, the transferability is nonexistent, whatever the maximum perturbation used, when looking at the targeted attack success rate. However, a small increase in misclassification rate is noticed, especially with IncV3 a, rising from 3.9% to 7%. This was verified when attacking any other classifier alone and checking the impact on the others. This clearly demonstrates the need to attack many classifiers at once. To do so, we considered an optimization using a sum of losses, each loss being related to one classifier: DISPLAYFORM0 where F i is the function relative to the i-th classifier. A weighted version can be considered to target a classifier more aggressively than another.
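Putting the pieces together, a minimal PyTorch sketch of the crafting loop is below; the model interface, the [0, 1] pixel range, and the remaining Adam arguments (the hyperparameter set above is truncated) are assumptions on our part.

import torch

def cia_attack(model, x, target, delta, steps=20, alpha=0.0, beta=1.0):
    # model: differentiable classifier returning logits; x: batched input in [alpha, beta]
    lo = torch.clamp(x - delta, min=alpha)
    hi = torch.clamp(x + delta, max=beta)
    c, s = (lo + hi) / 2, (hi - lo) / 2
    r = torch.zeros_like(x, requires_grad=True)          # r = 0: the centered initial attack
    opt = torch.optim.Adam([r], lr=0.2)                  # lr from the text; other args left at defaults
    for _ in range(steps):
        x_adv = c + s * torch.tanh(r)                    # perturbation bound holds by construction
        loss = torch.nn.functional.cross_entropy(model(x_adv), target)
        opt.zero_grad(); loss.backward(); opt.step()     # targeted: minimize cross entropy to the target
    return (c + s * torch.tanh(r)).detach()

For the ensemble attack, the loss line simply becomes a sum of cross-entropies over the F i, as in the equation above; a per-region ∆ (e.g., zero outside a 50px band) plugs in unchanged.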
In our experiments, they are all attacked equally. The results are displayed in TAB2. As we can see in TAB2, the success rate of the targeted attacks against the voting ensemble is high, around 80% for ∆ = 4.0 and approaching 100% for ∆ = 16.0. It is also interesting to notice that the success rate of attacking IncV3 b has decreased compared to the case when it was attacked alone. With ∆ = 4.0 for instance, it went from 97.6% to 92.5%. This can be explained by the fact that the gradients are balanced in a way that changes the input in a direction that minimizes all the losses at the same time. As a result of this section, attacking an ensemble defense using CIA is effective when we have complete access to all defense models (white box attacks). Section 3.3 will show that the transferability to an unknown model, while attacking four among the five, is limited but not negligible (more than 30%).

This defense approach has been developed to counter attacks using smoothing filters BID6. The intuition behind this idea was that smoothing removes the sharp changes brought while crafting adversarial examples. There are different feature squeezing possibilities, but we consider spatial smoothing in the current study. The other ones will be addressed in future experiments. While adding a filter, one should care about the possible loss of accuracy of the defense classifier. The authors in BID6 showed that a 3x3 filter is a good compromise that gives an effective defense while limiting the loss of accuracy. We noticed it too in the current investigation and therefore considered this kernel size. Also, different smoothing strategies are possible, like Gaussian, diagonal, mean, etc. As the results are quite similar in the experiments we carried out, we consider the mean filter in the sequel for its calculation simplicity. The filter used for defense can actually be replaced by a convolution layer before the neural network, as shown in Figure 4. Figure 4: spatial smoothing modeled using a convolution layer. As can be seen in Figure 4, adding a convolution layer results in a new network that can be represented using a new function F. We can therefore simply craft an adversarial example using this new function. As a start, we conducted an attack against only one network (IncV3 a). The results are shown in TAB3. As we can notice in TAB3, the success rate of targeted attacks and the misclassification rate are nearly 100%. This means that the spatial smoothing based defense is not effective. Once again, the transferability between models is nonexistent. An interesting point is to check the success of the same attacks when the defense is not actually using spatial smoothing for sure. Said in other words, we suspect the defense to use spatial smoothing but we are not sure about it. So, we craft attacks as before as a guarantee. Or not! The results are shown in TAB4. As we can clearly see, the attack success rate dropped to only 3.2% and the misclassification rate to 18.2%. The attack is therefore not effective in the case of uncertainty about the filter-based defense. This disagrees with the intuition behind spatial smoothing as an efficient defense against attacks presented in BID6. As demonstrated indeed, the filtered adversarial example can be an effective attack, but not the unfiltered one! Then, how to overcome this issue for a more robust attack? To answer this question, we consider a hybrid network where both filtered and unfiltered inputs are used for optimization, as represented in Figure 5.
The loss function to be used is given as a sum of two terms (weighting would be useful for more robustness) as follows: DISPLAYFORM0 where a, b are real positive numbers (a sketch of this hybrid loss is given at the end of this section). We conducted a new experiment using this hybrid loss function and the results are given in TAB5. TAB5 shows that the success rate is this time very high in both cases: 98.4% under the filter-based defense and nearly 100% under the no-filter defense. We conclude that this attack is robust whether filtering is used or not in the defense. Another question arises from the previous results, given the lack of transferability of attacks between models. What if an ensemble defense is used and filter use is uncertain? Once again, we consider the sum of all losses, but use the hybrid losses this time, as follows: DISPLAYFORM1 where L Hi is the hybrid loss relative to the i-th classifier. The results are presented in TAB6. Even with ∆ = 4.0 and considering only targeted attacks, the success and misclassification rates are high, around 50% when all classifiers use filters and much higher (around 80%) when no filters are used. Other experiments we conducted showed that the attack success rate for filter-based defense can be improved by assigning a greater weight b (twice the weight a) in the hybrid loss equation FORMULA9.

An investigation conducted by the authors in BID13 showed that most adversarial examples are countered if they are JPEG encoded before classifying them. Let's check if it is the case with CIA attacks. We conducted an attack against IncV3 a and classified the adversarial examples after being JPEG encoded. The encoding uses different compression quality values Q. A higher Q means a better quality of image, with a bigger size however, since it undergoes less loss and compression. The results are displayed in TAB7. TAB7 shows that CIA is robust when performing non-targeted attacks, as almost 100% of them are successful with Q = 80 and around 50% with Q = 20. The targeted attacks are less successful, with the highest score of 20% when Q = 80 and 0% when Q = 20. The result is somewhat mitigated with regard to targeted attacks. Indeed, one has to keep in mind that Q = 20 would not be a reasonable defense, as this would highly degrade the accuracy of the classifier. Nonetheless, we tried to improve the attack success score by finding a suitable approximation of JPEG transformations. Obviously, JPEG encoding cannot be modeled accurately using a differentiable function that can be included in the example crafting process, as we did before with spatial smoothing. As a brief recall, this encoding implies the passage from RGB space to another color space called YCbCr, where Y is the brightness and the Cb and Cr components represent the chrominance. Actually, humans can see considerably more fine detail in the brightness of an image (the Y component) than in the hue and color saturation of an image (the Cb and Cr components). Considering this fact, Cb and Cr can be downsampled by a factor of 2 or 3 without a sensitive change of receptivity to the human eye. Another fact is the eye not being sensitive to sharp changes in images. So, removing the high frequencies from the spectral space after a DCT (Discrete Cosine Transform) of an image would not affect its quality remarkably either. These are the important facts used when making a JPEG compression. Other steps, like dividing the image into blocks, the quantization of frequency components, the encoding of these components and so on, are thoroughly well documented for interested readers BID21. We do not take them into account.
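A minimal sketch of the hybrid loss and the mean-filter-as-convolution trick described above is given below; the 3x3 kernel and equal weights a = b = 1 follow the text, while the tensor layout is our assumption.

import torch
import torch.nn.functional as F

def mean_filter(x, k=3):
    # the spatial smoothing defense expressed as a fixed, channel-wise convolution
    c = x.shape[1]
    w = torch.full((c, 1, k, k), 1.0 / (k * k), device=x.device)
    return F.conv2d(x, w, padding=k // 2, groups=c)

def hybrid_loss(model, x_adv, target, a=1.0, b=1.0):
    # attack both the plain and the smoothed views so the example survives either defense
    return (a * F.cross_entropy(model(x_adv), target)
            + b * F.cross_entropy(model(mean_filter(x_adv)), target))

Per the observation above, taking b = 2a strengthens the attack against the filter-based defense; for an ensemble, one sums hybrid_loss over the classifiers F i.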
Our idea for approximating JPEG is represented in FIG5. As shown in FIG5, we first transform RGB images to YCbCr space using a function T (product and sum operations), then we filter each component using a convolution layer. Given the facts enumerated before about filtering high frequencies and downsampling the chrominance, we consider a mean 3x3 kernel for the brightness component and a 6x6 kernel for Cr and Cb. Once filtered, the result is brought back to RGB space using T −1 before feeding the neural network. The results of crafting attacks against IncV3 a using this approximation are given in TAB8. As can be noticed, the results are almost the same as those of TAB7. This is a bit intriguing, as if all the filters used in the approximation were nonexistent. However, a similar remark as in the filter-based defense case can be made: the attacks crafted using JPEG encoding are no longer attacks if JPEG is not used by the defense. This means that the considered approximation of JPEG encoding is not that bad, but not accurate enough to give strong attacks. It obviously has to be improved. We are working on it.

As we saw previously, defending against attacks is hard as long as the defense can be modeled using a function that can be added to form a new network usable for crafting examples. As a result, one has to find a transformation that can hardly be represented or approximated using simple functions. This is obviously not trivial, as we saw with the JPEG defense. Another, and more readily realizable, short-term defense is to consider a big number of well performing classifiers, including transformations with limited accuracy variance like spatial smoothing. As we saw in TAB6, the success rate decreased to around 50% when using five classifiers including filters for defense. This is not guaranteed, but it is worth investigating. For instance, when we attack all the classifiers except IncV4, the transferability is somewhat limited. Indeed, the success rate of attacks is almost 100% against the attacked classifiers, whereas it is only around 34% for IncV4, as can be noticed in TAB0.

In this paper we presented a new strategy called CIA for crafting adversarial examples while ensuring the maximum perturbation added to the original data to be smaller than a fixed threshold. We also demonstrated its robustness against some defenses: feature squeezing, ensemble defenses and even JPEG encoding. For future work, it would be interesting to investigate the transferability of CIA attacks to the physical world, as it is shown in BID13 that only a very limited amount of FGSM attacks, around 20%, survive this transfer. Another interesting perspective is to consider partial crafting attacks while selecting regions taking into account the content of the data. With regard to images for instance, it would be interesting to hide attacks with big but imperceptible perturbations.
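A sketch of this differentiable JPEG stand-in is below; the BT.601 color matrix and the replicate padding are our assumptions (the text only specifies the 3x3 and 6x6 mean kernels and the YCbCr round trip).

import torch
import torch.nn.functional as F

# RGB <-> YCbCr as a linear map T (ITU-R BT.601 coefficients, inputs scaled to [0, 1])
T = torch.tensor([[ 0.299,     0.587,     0.114   ],
                  [-0.168736, -0.331264,  0.5     ],
                  [ 0.5,      -0.418688, -0.081312]])

def mean_blur(x, k):
    # mean filter with padding chosen so even kernels (k = 6) keep the spatial size
    w = torch.full((1, 1, k, k), 1.0 / (k * k))
    x = F.pad(x, ((k - 1) // 2, k // 2, (k - 1) // 2, k // 2), mode='replicate')
    return F.conv2d(x, w)

def approx_jpeg(rgb):                                    # rgb: [B, 3, H, W]
    ycc = torch.einsum('ij,bjhw->bihw', T, rgb)          # T: to YCbCr
    y, cb, cr = (mean_blur(ycc[:, i:i+1], k) for i, k in ((0, 3), (1, 6), (2, 6)))
    return torch.einsum('ij,bjhw->bihw', torch.inverse(T),
                        torch.cat([y, cb, cr], dim=1))   # T^-1: back to RGB

Because every step is differentiable, approx_jpeg can be composed with the classifier exactly like the mean filter in the hybrid loss.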
In this paper, a new method we call Centered Initial Attack (CIA) is provided. It ensures by construction that the maximum perturbation is smaller than a threshold fixed beforehand, without any clipping process.
952
scitldr
Answering complex logical queries on large-scale incomplete knowledge graphs (KGs) is a fundamental yet challenging task. Recently, a promising approach to this problem has been to embed KG entities as well as the query into a vector space such that entities that answer the query are embedded close to the query. However, prior work models queries as single points in the vector space, which is problematic because a complex query represents a potentially large set of its answer entities, but it is unclear how such a set can be represented as a single point. Furthermore, prior work can only handle queries that use conjunctions ($\wedge$) and existential quantifiers ($\exists$). Handling queries with logical disjunctions ($\vee$) remains an open problem. Here we propose query2box, an embedding-based framework for reasoning over arbitrary queries with $\wedge$, $\vee$, and $\exists$ operators in massive and incomplete KGs. Our main insight is that queries can be embedded as boxes (i.e., hyper-rectangles), where a set of points inside the box corresponds to a set of answer entities of the query. We show that conjunctions can be naturally represented as intersections of boxes and also prove a negative result that handling disjunctions would require embeddings with dimension proportional to the number of KG entities. However, we show that by transforming queries into a Disjunctive Normal Form, query2box is capable of handling arbitrary logical queries with $\wedge$, $\vee$, $\exists$ in a scalable manner. We demonstrate the effectiveness of query2box on two large KGs and show that query2box achieves up to 25% relative improvement over the state of the art. Knowledge graphs (KGs) capture different types of relationships between entities, e.g., Canada −citizen→ Hinton. Answering arbitrary logical queries, such as "where did Canadian citizens with Turing Award graduate?", over such KGs is a fundamental task in question answering, knowledge base reasoning, as well as AI more broadly. First-order logical queries can be represented as Directed Acyclic Graphs (DAGs) (Fig. 1(A)) and be reasoned over according to the DAGs to obtain a set of answers (Fig. 1(C)). While simple and intuitive, such an approach has many drawbacks: (1) the computational complexity of subgraph matching is exponential in the query size, and thus cannot scale to modern KGs; (2) subgraph matching is very sensitive, as it cannot correctly answer queries with missing relations. To remedy (2), one could impute missing relations (Džeroski, 2009), but that would only make the KG denser, which would further exacerbate issue (1) (Krompaß et al., 2014). Recently, a promising alternative approach has emerged, where logical queries as well as KG entities are embedded into a low-dimensional vector space such that entities that answer the query are embedded close to the query. Such an approach robustly handles missing relations and is also orders of magnitude faster, as answering an arbitrary logical query is reduced to simply identifying the entities nearest to the embedding of the query in the vector space. However, prior work embeds a query into a single point in the vector space. This is problematic because answering a logical query requires modeling a set of active entities while traversing the KG (Fig. 1(C)), and how to effectively model a set with a single point is unclear.
Figure 1 (caption, continued): We then obtain the query embedding according to the computation graph (B) as a sequence of box operations: start with two nodes TuringAward and Canada and apply Win and Citizen projection operators, followed by an intersection operator (denoted as a shaded intersection of the yellow and orange boxes) and another projection operator. The final embedding of the query is a green box, and the query's answers are the entities inside the box.

Furthermore, it is also unnatural to define logical operators (e.g., set intersection) over two points in the vector space. Another fundamental limitation of prior work is that it can only handle conjunctive queries, a subset of first-order logic that only involves conjunction (∧) and the existential quantifier (∃), but not disjunction (∨). It remains an open question how to handle disjunction effectively in the vector space. Here we present QUERY2BOX, an embedding-based framework for reasoning over KGs that is capable of handling arbitrary Existential Positive First-order (EPFO) logical queries (i.e., queries that include any set of ∧, ∨, and ∃) in a scalable manner. First, to accurately model a set of entities, our key idea is to use a closed region rather than a single point in the vector space. Specifically, we use a box (axis-aligned hyper-rectangle) to represent a query (Fig. 1(D)). This provides three important benefits: (1) boxes naturally model the sets of entities they enclose; (2) logical operators (e.g., set intersection) can naturally be defined over boxes, similarly as in Venn diagrams (Venn, 1880); (3) executing logical operators over boxes results in new boxes, which means that the operations are closed; thus, logical reasoning can be efficiently performed in QUERY2BOX by iteratively updating boxes according to the query computation graph (Fig. 1(B)(D)). We show that QUERY2BOX can naturally handle conjunctive queries. We first prove a negative result that embedding EPFO queries to only single points or boxes is intractable, as it would require an embedding dimension proportional to the number of KG entities. However, we provide an elegant solution, where we transform a given EPFO logical query into a Disjunctive Normal Form (DNF), i.e., a disjunction of conjunctive queries. Given any EPFO query, QUERY2BOX represents it as a set of individual boxes, where each box is obtained for each conjunctive query in the DNF. We then return the nearest neighbor entities to any of the boxes as the answers to the query. This means that to answer any EPFO query we first answer the individual conjunctive queries and then take the union of the answer entities. We evaluate QUERY2BOX on standard KG benchmarks and show the following: (1) QUERY2BOX provides strong generalization as it can answer complex queries that it has never seen during training; (2) QUERY2BOX is robust as it can answer any EPFO query with high accuracy even when relations involved in answering the query are missing in the KG; (3) QUERY2BOX provides up to 25% relative improvement in the accuracy of answering EPFO queries over state-of-the-art baselines. Most related to our work are embedding approaches for multi-hop reasoning over KGs. The crucial difference is that we provide a way to tractably handle a larger subset of first-order logic (EPFO queries vs. conjunctive queries) and that we embed queries as boxes, which provides better accuracy and generalization. A second line of related work is on structured embeddings, which associate images, words, sentences, or knowledge base concepts with geometric objects such as regions, densities, and orderings.
While the above work uses geometric objects to model individual entities and their pairwise relations, we use geometric objects to model sets of entities and reason over those sets. In this sense our work is also related to classical Venn diagrams (Venn, 1880): our boxes are essentially Venn diagrams in vector space, but our boxes and entity embeddings are jointly learned, which allows us to reason over incomplete KGs. Here we present QUERY2BOX, where we define an objective function that allows us to learn embeddings of entities in the KG, and at the same time also learn parameterized geometric logical operators over boxes. Then, given an arbitrary EPFO query q (Fig. 1(A)), we identify its computation graph (Fig. 1(B)) and embed the query by executing a set of geometric operators over boxes (Fig. 1(D)). Entities that are enclosed in the final box embedding are returned as answers to the query (Fig. 1(D)). In order to train our system, we generate a set of queries together with their answers at training time and then learn entity embeddings and geometric operators such that the queries can be accurately answered. We show in the following sections that our approach is able to generalize to queries and query structures never seen during training. We denote a KG as G = (V, R), where v ∈ V represents an entity, and r ∈ R is a binary function r: V × V → {True, False} indicating whether the relation r holds between a pair of entities or not. In the KG, such a binary output indicates the existence of a directed edge between a pair of entities, i.e., v −r→ v′. Conjunctive queries are a subclass of first-order logical queries that use existential (∃) and conjunction (∧) operations. They are formally defined as follows:

q[V?] = V? . ∃V1, . . . , Vk : e1 ∧ e2 ∧ · · · ∧ en, where each ei is of the form r(va, V) with va ∈ V, V ∈ {V?, V1, . . . , Vk}, r ∈ R, or of the form r(V, V′) with V, V′ ∈ {V?, V1, . . . , Vk}, V ≠ V′, r ∈ R,

where va represents a non-variable anchor entity, V1, . . . , Vk are existentially quantified bound variables, and V? is the target variable. The goal of answering the logical query q is to find a set of entities ⟦q⟧ ⊆ V such that v ∈ ⟦q⟧ iff q[v] = True. We call ⟦q⟧ the denotation set (i.e., answer set) of query q. As shown in Fig. 1(A), the dependency graph (DG) is a graphical representation of a conjunctive query q, where nodes correspond to variable or non-variable entities in q and edges correspond to relations in q. In order for the query to be valid, the corresponding DG needs to be a Directed Acyclic Graph (DAG), with the anchor entities as the source nodes of the DAG and the query target V? as the unique sink node. From the dependency graph of query q, one can also derive the computation graph, which consists of two types of directed edges that represent operators over sets of entities:

• Projection: Given a set of entities S ⊆ V and a relation r ∈ R, this operator obtains ∪_{v∈S} {v′ ∈ V : r(v, v′) = True}.
• Intersection: Given a set of entity sets {S1, S2, . . . , Sn}, this operator obtains ∩_{i=1}^{n} Si.

For a given query q, the computation graph specifies the procedure of reasoning to obtain a set of answer entities: starting from a set of anchor nodes, the above two operators are applied iteratively until the unique sink target node is reached. The entire procedure is analogous to traversing KGs following the computation graph. So far we have defined conjunctive queries as computation graphs that can be executed directly over the nodes and edges in the KG. Now, we define logical reasoning in the vector space. Our intuition follows Fig. 1: given a complex query, we shall decompose it into a sequence of logical operations, and then execute these operations in the vector space.
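Executed directly on the symbolic KG, these two operators amount to a short set-based traversal. The toy sketch below (the dictionary KG and the entity/relation names are ours, mirroring the Fig. 1 running example) is illustrative only, not the paper's code.

```python
from functools import reduce

def project(kg, S, r):
    """Projection: entities reachable from any v in S via relation r.
    `kg` maps (entity, relation) -> set of entities (toy stand-in for a KG)."""
    return set().union(*(kg.get((v, r), set()) for v in S))

def intersect(sets):
    """Intersection over a list of entity sets."""
    return reduce(lambda a, b: a & b, sets)

# Toy run of the Fig. 1 query (names are illustrative):
kg = {("TuringAward", "win"): {"Hinton", "Bengio"},
      ("Canada", "citizen"): {"Hinton", "Trudeau"},
      ("Hinton", "graduate"): {"Edinburgh"}}
winners = project(kg, {"TuringAward"}, "win")
canadians = project(kg, {"Canada"}, "citizen")
print(project(kg, intersect([winners, canadians]), "graduate"))  # {'Edinburgh'}
```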
By executing the operations in the vector space this way, we obtain the embedding of the query, and the answers to the query are the entities enclosed in the final query embedding box. In the following, we detail our two methodological advances: the use of box embeddings to efficiently model and reason over sets of entities in the vector space, and how to tractably handle the disjunction operator (∨), expanding the class of first-order logic that can be modeled in the vector space (Section 3.3). Box embeddings. To efficiently model a set of entities in the vector space, we use boxes (i.e., axis-aligned hyper-rectangles). The benefit is that, unlike a single point, a box has an interior; thus, if an entity is in a set, it is natural to model the entity embedding as a point inside the box. Formally, we operate on R^d, and define a box in R^d by p = (Cen(p), Off(p)) ∈ R^2d as

Box_p = {v ∈ R^d : Cen(p) − Off(p) ⪯ v ⪯ Cen(p) + Off(p)},

where ⪯ is element-wise inequality, Cen(p) ∈ R^d is the center of the box, and Off(p) ∈ R^d_{≥0} is the positive offset of the box, modeling the size of the box. Each entity v ∈ V in the KG is assigned a single vector v ∈ R^d (i.e., a zero-size box), and the box embedding p models {v ∈ V : v ∈ Box_p}, i.e., the set of entities whose vectors are inside the box. For the rest of the paper, we use bold face to denote embeddings, e.g., the embedding of v is denoted by v. Our framework reasons over KGs in the vector space following the computation graph of the query, as shown in Fig. 1(D): we start from the initial box embeddings of the source nodes (anchor entities) and sequentially update the embeddings according to the logical operators. Below, we describe how we set the initial box embeddings for the source nodes, as well as how we model the projection and intersection operators (defined in Sec. 3.1) as geometric operators that operate over boxes. After that, we describe our entity-to-box distance function and the overall objective that learns the embeddings as well as the geometric operators. Initial boxes for source nodes. Each source node represents an anchor entity v ∈ V, which we can regard as a set that only contains the single entity. Such a single-element set can be naturally modeled by a box of size/offset zero centered at v. Formally, we set the initial box embedding as (v, 0), where v ∈ R^d is the anchor entity vector and 0 is a d-dimensional all-zero vector. Geometric projection operator. We associate each relation r ∈ R with a relation embedding r = (Cen(r), Off(r)) ∈ R^2d with Off(r) ⪰ 0. Given an input box embedding p, we model the projection by p + r, where we sum the centers and sum the offsets. This gives us a new box with a translated center and a larger offset because Off(r) ⪰ 0, as illustrated in Fig. 2(A). The adaptive box size effectively models the different number of entities/vectors in the set. Geometric intersection operator. We model the intersection of a set of box embeddings {p1, . . . , pn} as p_inter = (Cen(p_inter), Off(p_inter)), which is calculated by performing attention over the box centers and shrinking the box offset using the sigmoid function:

Cen(p_inter) = Σ_i a_i ⊙ Cen(p_i), where a_i = exp(MLP(p_i)) / Σ_j exp(MLP(p_j)),
Off(p_inter) = Min({Off(p1), . . . , Off(pn)}) ⊙ σ(DeepSets({p1, . . . , pn})),

where ⊙ is the dimension-wise product, MLP(·) is a Multi-Layer Perceptron, σ(·) is the sigmoid function, DeepSets(·) is the permutation-invariant deep architecture, and both Min(·) and exp(·) are applied in a dimension-wise manner. Following prior work, we model all the deep sets by DeepSets({x1, . . . , xN}) = MLP((1/N) Σ_{i=1}^{N} MLP(x_i)), where all the hidden dimensionalities of the two MLPs are the same as the input dimensionality. The intuition behind our geometric intersection is to generate a smaller box that lies inside the given set of boxes, as illustrated in Fig. 2.
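To make the geometric operators concrete, here is a minimal PyTorch sketch of our reading of the equations above; the layer shapes and the softmax-over-boxes attention are assumptions rather than the authors' exact architecture, and a real model would share these modules across all queries.

```python
import torch
import torch.nn as nn

class BoxOps(nn.Module):
    """Sketch of query2box's geometric operators; dimensions are assumed."""
    def __init__(self, d):
        super().__init__()
        self.att_mlp = nn.Sequential(nn.Linear(2 * d, 2 * d), nn.ReLU(), nn.Linear(2 * d, d))
        self.ds_inner = nn.Sequential(nn.Linear(2 * d, 2 * d), nn.ReLU())
        self.ds_outer = nn.Sequential(nn.Linear(2 * d, 2 * d), nn.ReLU(), nn.Linear(2 * d, d))

    def project(self, box, rel):
        # Translate the center and grow the offset: p + r.
        (c, o), (rc, ro) = box, rel
        return c + rc, o + ro

    def intersect(self, boxes):
        # boxes: list of (center, offset) tensors, each of shape (d,).
        cen = torch.stack([c for c, _ in boxes])             # (n, d)
        off = torch.stack([o for _, o in boxes])             # (n, d)
        p = torch.cat([cen, off], dim=-1)                    # (n, 2d)
        a = torch.softmax(self.att_mlp(p), dim=0)            # per-dimension attention
        new_cen = (a * cen).sum(dim=0)                       # attention-weighted center
        ds = self.ds_outer(self.ds_inner(p).mean(dim=0))     # DeepSets over the boxes
        new_off = off.min(dim=0).values * torch.sigmoid(ds)  # shrink the smallest offset
        return new_cen, new_off
```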
(Footnote: Different from the generic deep sets used to model the intersection, our geometric intersection operator effectively constrains the center position and models the shrinking set size.) Entity-to-box distance. Given a query box q ∈ R^2d and an entity vector v ∈ R^d, we define their distance as

dist_box(v; q) = dist_outside(v; q) + α · dist_inside(v; q),

where q_max = Cen(q) + Off(q), q_min = Cen(q) − Off(q), 0 < α < 1 is a fixed scalar, and

dist_outside(v; q) = ‖Max(v − q_max, 0) + Max(q_min − v, 0)‖_1,
dist_inside(v; q) = ‖Cen(q) − Min(q_max, Max(q_min, v))‖_1.

As illustrated in Fig. 2(C), dist_outside corresponds to the distance between the entity and the closest corner/side of the box. Analogously, dist_inside corresponds to the distance between the center of the box and its side/corner (or the entity itself if the entity is inside the box). The key here is to downweight the distance inside the box by using 0 < α < 1. This means that as long as entity vectors are inside the box, we regard them as "close enough" to the query center (i.e., dist_outside is 0, and dist_inside is scaled by α). When α = 1, dist_box reduces to the ordinary L1 distance, i.e., ‖Cen(q) − v‖_1, which is used by the conventional TransE as well as prior query embedding methods. Training objective. Our next goal is to learn entity embeddings as well as the geometric projection and intersection operators. Given a training set of queries and their answers, we optimize a negative sampling loss to effectively optimize our distance-based model:

L = −log σ(γ − dist_box(v; q)) − Σ_{i=1}^{k} (1/k) log σ(dist_box(v′_i; q) − γ),

where γ represents a fixed scalar margin, v ∈ ⟦q⟧ is a positive entity (i.e., an answer to the query q), v′_i ∉ ⟦q⟧ is the i-th negative entity (a non-answer to the query q), and k is the number of negative entities. So far we have focused on conjunctive queries, and our aim here is to tractably handle in the vector space a wider class of logical queries, called Existential Positive First-order (EPFO) queries, that involve ∨ in addition to ∃ and ∧. We specifically focus on EPFO queries whose computation graphs are a DAG, the same as that of conjunctive queries (Section 3.1), except that we now have an additional type of directed edge, called union, defined as follows:

• Union: Given a set of entity sets {S1, S2, . . . , Sn}, this operator obtains ∪_{i=1}^{n} Si.

A straightforward approach here would be to define another geometric operator for union and embed the query as we did in the previous sections. An immediate challenge for our box embeddings is that boxes can be located anywhere in the vector space, so their union would no longer be a simple box. In other words, the union operation over boxes is not closed. Theoretically, we can prove a general negative result for any embedding-based method that maps a query q into q such that dist(v; q) ≤ β iff v ∈ ⟦q⟧. Here, dist(v; q) is the distance between the entity and query embeddings, e.g., dist_box(v; q) or ‖v − q‖_1, and β is a fixed threshold. Theorem 1. Consider any M conjunctive queries q1, . . . , qM whose denotation sets ⟦q1⟧, . . . , ⟦qM⟧ are disjoint with each other, ∀ i ≠ j, ⟦qi⟧ ∩ ⟦qj⟧ = ∅. Let D be the VC dimension of the function class {sign(β − dist(·; q)) : q ∈ Ξ}, where Ξ represents the query embedding space and sign(·) is the sign function. Then, we need D ≥ M to model any EPFO query, i.e., so that dist(v; q) ≤ β ⇔ v ∈ ⟦q⟧ is satisfied for every EPFO query q. The proof is provided in Appendix A, where the key is that the introduction of the union operation forces us to model the powerset {∪_{qi∈S} ⟦qi⟧ : S ⊆ {q1, . . . , qM}} in the vector space. For a real-world KG, there are M ≈ |V| conjunctive queries with non-overlapping answers. For example, in the commonly-used FB15k dataset, derived from Freebase, we find M = 13,365, while |V| is 14,951 (see Appendix B for details).
Theorem 1 shows that in order to accurately model any EPFO query with the existing framework, the complexity of the distance function, measured by the VC dimension, needs to be as large as the number of KG entities. This implies that if we use common distance functions based on a hyper-plane, a Euclidean sphere, or an axis-aligned rectangle, their parameter dimensionality needs to be Θ(M), which is Θ(|V|) for the real KGs we are interested in. In other words, the dimensionality of the logical query embeddings needs to be Θ(|V|), which is not low-dimensional; thus it is not scalable to large KGs and not generalizable in the presence of unobserved KG edges. To rectify this issue, our key idea is to transform a given EPFO query into Disjunctive Normal Form (DNF), i.e., a disjunction of conjunctive queries, so that the union operation only appears in the last step. Each of the conjunctive queries can then be reasoned over in the low-dimensional space, after which we can aggregate the results by a simple and intuitive procedure. In the following, we describe the transformation to DNF and the aggregation procedure. Transformation to DNF. Any first-order logic can be transformed into an equivalent DNF. We perform such a transformation directly in the space of the computation graph, i.e., by moving all the edges of type "union" to the last step of the computation graph. Let G_q = (V_q, E_q) be the computation graph for a given EPFO query q, and let V_union ⊂ V_q be the set of nodes whose in-coming edges are of type "union". For each v ∈ V_union, define P_v ⊂ V_q as the set of its parent nodes. We first generate N = ∏_{v∈V_union} |P_v| different computation graphs G_q^(1), . . . , G_q^(N) as follows, each with different choices of v_parent in the first step. 1. For every v ∈ V_union, select one parent node v_parent ∈ P_v. 2. Remove all the edges of type "union". 3. Merge v and v_parent, while retaining all other edge connections. We then combine the obtained computation graphs G_q^(1), . . . , G_q^(N) as follows to give the final equivalent computation graph. 1. Convert the target sink nodes of all the obtained computation graphs into existentially quantified bound variable nodes. 2. Create a new target sink node V?, and draw directed edges of type "union" from all the above variable nodes to the new target node. An example of the entire transformation procedure is illustrated in Fig. 3. By the definition of the union operation, our procedure gives a computation graph equivalent to the original one. Furthermore, as all the union operators are removed from G_q^(1), . . . , G_q^(N), all of these computation graphs represent conjunctive queries, which we denote as q^(1), . . . , q^(N). We can then apply the existing framework to obtain a set of embeddings for these conjunctive queries as q^(1), . . . , q^(N). Aggregation. Next we define the distance function between the given EPFO query q and an entity v ∈ V. Since q is logically equivalent to q^(1) ∨ · · · ∨ q^(N), we can naturally define the aggregated distance function using the box distance dist_box:

dist_agg(v; q) = Min{dist_box(v; q^(1)), . . . , dist_box(v; q^(N))},

where dist_agg is parameterized by the EPFO query q. When q is a conjunctive query, i.e., N = 1, dist_agg(v; q) = dist_box(v; q). For N > 1, dist_agg takes the minimum distance to the closest box as the distance to an entity. This modeling aligns well with the union operation: an entity is inside the union of sets as long as the entity is in one of the sets. Note that our DNF-query rewriting scheme is general and is able to extend any method that works for conjunctive queries to handle the more general class of EPFO queries.
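The distance, training objective, and DNF aggregation above fit in one short PyTorch fragment. The following is our illustrative reading of those formulas (helper names and tensor shapes are assumptions, not the released code).

```python
import torch
import torch.nn.functional as F

def dist_box(v, cen, off, alpha=0.2):
    # dist_outside + alpha * dist_inside, per the entity-to-box distance.
    q_min, q_max = cen - off, cen + off
    outside = (torch.clamp(v - q_max, min=0) + torch.clamp(q_min - v, min=0)).norm(p=1, dim=-1)
    inside = (cen - torch.min(q_max, torch.max(q_min, v))).norm(p=1, dim=-1)
    return outside + alpha * inside

def dist_agg(v, boxes, alpha=0.2):
    # DNF aggregation: minimum box distance over the N conjunctive branches.
    return torch.stack([dist_box(v, c, o, alpha) for c, o in boxes]).min(dim=0).values

def loss(pos_v, neg_vs, cen, off, gamma=24.0, alpha=0.2):
    # Negative sampling loss: pull answers inside the box, push the k sampled
    # non-answers at least gamma away.
    pos = -F.logsigmoid(gamma - dist_box(pos_v, cen, off, alpha))
    neg = -F.logsigmoid(dist_box(neg_vs, cen, off, alpha) - gamma).mean(dim=-1)
    return (pos + neg).mean()
```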
Computational complexity. The computational complexity of answering an EPFO query with our framework is equal to that of answering the N conjunctive queries. In practice, N might not be so large, and all the N computations can be parallelized. Furthermore, answering each conjunctive query is very fast, as it requires us to execute a sequence of simple box operations (each of which takes constant time) and then perform a range search in the embedding space, which can also be done in constant time using techniques based on Locality Sensitive Hashing. Our goal in the experiment section is to evaluate the performance of QUERY2BOX on discovering answers to complex logical queries that cannot be obtained by traversing the incomplete KG. This means we will focus on answering queries where one or more missing edges in the KG have to be successfully predicted in order to obtain the additional answers.

Figure 4: Query structures considered in the experiments, where anchor entities and relations are to be specified to instantiate logical queries. The naming for each query structure is provided under each subfigure, where 'p', 'i', and 'u' stand for 'projection', 'intersection', and 'union', respectively. Models are trained on the first 5 query structures and evaluated on all 9 query structures.

Table 2: Number of training, validation, and test queries generated for different query structures.

We perform experiments on the standard KG benchmarks FB15k and FB15k-237. Both are subsets of Freebase, a large-scale KG containing general facts. Dataset statistics are shown in Table 1. We follow the standard evaluation protocol in the KG literature: given the standard split of edges into training, test, and validation sets (Table 1), we first augment the KG to also include inverse relations, effectively doubling the number of edges in the graph. We then create three graphs: G_train, which only contains training edges; we use this graph to train node embeddings as well as box operators. We then also generate two bigger graphs: G_valid, which contains G_train plus the validation edges, and G_test, which includes G_valid as well as the test edges. We consider 9 kinds of diverse query structures, shown and named in Fig. 4. We use 5 query structures for training and then evaluate on all 9 query structures. Given a query q, let ⟦q⟧_train, ⟦q⟧_val, and ⟦q⟧_test denote the sets of answer entities obtained by running subgraph matching of q on G_train, G_valid, and G_test, respectively. (We refer the reader to Appendix C for full details on query generation.) At training time, we use ⟦q⟧_train as positive examples for the query and other random entities as negative examples (Table 2). However, at test/validation time we proceed differently. Note that we focus on answering queries where generalization performance is crucial and at least one edge needs to be imputed in order to answer the query. Thus, rather than evaluating a given query on the full validation (or test) set ⟦q⟧_val (⟦q⟧_test) of answers, we validate the method only on non-trivial answers that involve missing relations. Given how we constructed G_train ⊆ G_valid ⊆ G_test, we have ⟦q⟧_train ⊆ ⟦q⟧_val ⊆ ⟦q⟧_test, and thus we evaluate the method on ⟦q⟧_val \ ⟦q⟧_train to tune hyper-parameters and then report results on identifying answer entities in ⟦q⟧_test \ ⟦q⟧_val. This means we always evaluate on queries/entities that were not part of the training set and that the method has not seen before. Given a test query q, for each of its non-trivial answers v ∈ ⟦q⟧_test \ ⟦q⟧_val, we use dist_box in Eq. 3
to rank v among V \ ⟦q⟧_test. Denoting the rank of v by Rank(v), we then calculate evaluation metrics for answering query q, such as Mean Reciprocal Rank (MRR) and Hits at K (H@K):

Metrics(q) = (1 / |⟦q⟧_test \ ⟦q⟧_val|) Σ_{v ∈ ⟦q⟧_test \ ⟦q⟧_val} f_metrics(Rank(v)),    (6)

where f_metrics(x) = 1/x for MRR and f_metrics(x) = 1[x ≤ K] for H@K. We then average Eq. 6 over all the queries within the same query structure and report the results separately for different query structures. The same evaluation protocol is applied at the validation stage, except that we evaluate on ⟦q⟧_val \ ⟦q⟧_train rather than ⟦q⟧_test \ ⟦q⟧_val. We compare our framework QUERY2BOX against the state-of-the-art GQE. GQE embeds a query into a single vector, and models the projection and intersection operators as translation and deep sets, respectively. The L1 distance is used as the distance between query and entity vectors. For a fair comparison, we also compare with GQE-DOUBLE (GQE with doubled embedding dimensionality) so that QUERY2BOX and GQE-DOUBLE have the same number of parameters. Although the original GQE cannot handle EPFO queries, we apply our DNF-query rewriting strategy and in our evaluation extend GQE to handle general EPFO queries as well. Furthermore, we perform an extensive ablation study by considering several variants of QUERY2BOX (abbreviated as Q2B). We list our method as well as its variants below. • Q2B (our method). The box embeddings are used to model queries, and the attention mechanism is used for the intersection operator. • Q2B-AVG. The attention mechanism for intersection is replaced with averaging. • Q2B-DEEPSETS. The attention mechanism for intersection is replaced with the deep sets. • Q2B-AVG-1P. The variant of Q2B-AVG that is trained with only 1p queries (see Fig. 4); thus, logical operators are not explicitly trained. • Q2B-SHAREDOFFSET. The box offset is shared across all queries (every query is represented by a box of the same size). We use an embedding dimensionality of d = 400 and set γ = 24, α = 0.2 for the loss in Eq. 4. We train on all types of training queries jointly. In every iteration, we sample a minibatch of 512 queries for each query structure (details in Appendix C), and we sample 1 answer entity and 128 negative entities for each query. We optimize the loss in Eq. 4 using the Adam optimizer with learning rate 0.0001. We train all models for 250 epochs, monitor the performance on the validation set, and report the test performance. We start by comparing our Q2B with the state-of-the-art query embedding method GQE on FB15k and FB15k-237. As listed in Table 3 and Table 4, our method significantly and consistently outperforms the state-of-the-art baseline across all the query structures, including those not seen during training as well as those with union operations. On average, we obtain 9.8% (25% relative) and 3.8% (15% relative) higher H@3 than the best baselines on FB15k and FB15k-237, respectively.

Table 6: H@3 on test set for QUERY2BOX vs. several of its variants on FB15k.

Notice that naïvely increasing the embedding dimensionality in GQE yields limited performance improvement. Our Q2B is able to effectively model a set of entities by the box embedding, and achieves a large performance gain compared with GQE-DOUBLE (with the same number of parameters), which represents queries as point vectors. Also notice that Q2B performs well on new queries with the same structure as the training queries as well as on new query structures never seen during training. We also conduct extensive ablation studies, which are summarized in Tables 5 and 6.
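As an aside on the evaluation protocol, Eq. 6 and its averaging are straightforward to implement. The sketch below is our own helper with assumed names: it computes MRR and H@K from the ranks of a query's non-trivial answers.

```python
import numpy as np

def eval_query(ranks, K=3):
    """Per-query metrics in the spirit of Eq. 6: average f_metrics over the
    non-trivial answers, where ranks[i] is Rank(v_i) among V \ [[q]]_test."""
    ranks = np.asarray(ranks, dtype=float)
    mrr = np.mean(1.0 / ranks)          # f_metrics(x) = 1/x
    hits_at_k = np.mean(ranks <= K)     # f_metrics(x) = 1[x <= K]
    return mrr, hits_at_k

# e.g. three non-trivial answers ranked 1st, 4th, and 10th:
print(eval_query([1, 4, 10]))  # (MRR, H@3)
```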
Importance of attention mechanism. First, we show that our modeling of intersection using the attention mechanism is important. Given a set of box embeddings {p1, . . . , pn}, Q2B-AVG is the most naïve way to calculate the center of the resulting box embedding p_inter, while Q2B-DEEPSETS is too flexible and neglects the fact that the center should be a weighted average of Cen(p1), . . . , Cen(pn). Compared with the two methods, Q2B achieves better performance in answering queries that involve the intersection operation, e.g., 2i, 3i, pi, ip. Specifically, on FB15k-237, Q2B obtains more than 4% and 2% absolute gains in H@3 compared to Q2B-AVG and Q2B-DEEPSETS, respectively. Necessity of training on complex queries. Second, we observe that explicitly training on complex logical queries beyond one-hop path queries (1p in Fig. 4) improves the reasoning performance. Although Q2B-AVG-1P is able to achieve strong performance on 1p and 2u, where answering 2u is essentially answering two 1p queries with an additional minimum operation (see Eq. 5 in Section 3.3), Q2B-AVG-1P fails miserably in answering other types of queries involving logical operators. On the other hand, the other methods (Q2B, Q2B-AVG, and Q2B-DEEPSETS) that are explicitly trained on the logical queries achieve much higher accuracy, with up to 10% absolute average improvement of H@3 on FB15k. Adaptive box size for different queries. Third, we investigate the importance of learning adaptive offsets (box sizes) for different queries. Q2B-SHAREDOFFSET is a variant of our Q2B where all the box embeddings share the same learnable offset. Q2B-SHAREDOFFSET does not work well across query types. This is possibly because different queries have different numbers of answer entities, and the adaptive box size enables us to better model this. In this paper we proposed a reasoning framework called QUERY2BOX that can effectively model and reason over sets of entities as well as handle EPFO queries in a vector space. Given a logical query, we first transform it into DNF, embed each conjunctive query into a box, and output the entities closest to their nearest boxes. Our approach is capable of handling all types of EPFO queries scalably and accurately. Experimental results on standard KGs demonstrate that QUERY2BOX significantly outperforms existing work in answering diverse logical queries. A PROOF OF THEOREM 1 Proof. To model any EPFO query, we need to at least model the subset of EPFO queries Q = {∨_{qi∈S} qi : S ⊆ {q1, . . . , qM}}, whose corresponding denotation sets are {∪_{qi∈S} ⟦qi⟧ : S ⊆ {q1, . . . , qM}}. For the sake of modeling Q, without loss of generality, we consider assigning a single entity embedding v_{qi} to all v ∈ ⟦qi⟧, so there are M kinds of entity vectors, v_{q1}, . . . , v_{qM}. To model all queries in Q, it is necessary to satisfy the following:

there exist v_{q1}, . . . , v_{qM} such that for every S ⊆ {q1, . . . , qM} there is an embedding q_S with dist(v_{qi}; q_S) ≤ β for all qi ∈ S and dist(v_{qi}; q_S) > β for all qi ∉ S,    (7)

where q_S is the embedding of the query ∨_{qi∈S} qi. Eq. 7 means that we can learn the M kinds of entity vectors such that for every query in Q, we can obtain its embedding to model the corresponding set using the distance function. Notice that this is agnostic to the specific algorithm used to embed the query ∨_{q∈S} q into q_S; thus, our result is generally applicable to any method that embeds the query into a single vector. Crucially, satisfying Eq. 7 is equivalent to the function class {sign(β − dist(·; q)) : q ∈ Ξ} being able to shatter {v_{q1}, . . . , v_{qM}}, i.e., any binary labeling of the points can be perfectly fit by some classifier in the function class.
To sum up, in order to model any EPFO query, we need to at least model any query in Q, which requires the VC dimension of the distance function to be larger than or equal to M. Given the full KG G_test for the FB15k dataset, our goal is to find conjunctive queries q1, . . . , qM such that ⟦q1⟧, . . . , ⟦qM⟧ are disjoint with each other. For the conjunctive queries, we use two types of queries, '1p' and '2i', whose query structures are shown in Figure 4. On FB15k, we instantiate 308,006 queries of type '1p', which we denote by S_1p. Out of all the queries in S_1p, 129,717 queries have more than one answer entity; we denote this subset of queries by S′_1p. We then generate a set of queries of type '2i' by first randomly sampling two queries from S_1p and then taking their conjunction; we denote the resulting set of queries by S_2i. Now, we use S′_1p and S_2i to generate a set of conjunctive queries whose denotation sets are disjoint with each other. First, we prepare two empty sets, V_seen = ∅ and Q = ∅. Then, for every q ∈ S′_1p, if V_seen ∩ ⟦q⟧ = ∅ holds, we let Q ← Q ∪ {q} and V_seen ← V_seen ∪ ⟦q⟧. This procedure already gives us a Q containing 10,812 conjunctive queries whose denotation sets are disjoint with each other. We can further apply the analogous procedure to S_2i, which gives us a further increased Q containing 13,365 conjunctive queries whose denotation sets are disjoint with each other. Therefore, we get M = 13,365.
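The greedy selection just described reduces to a few lines. In this sketch, `answers` is a hypothetical mapping from each query to its denotation set ⟦q⟧; it is an illustration of the procedure, not the authors' script.

```python
def disjoint_queries(queries, answers):
    """Greedy selection: keep a query only if its denotation set shares no
    entity with the denotation sets of previously kept queries."""
    seen, kept = set(), []
    for q in queries:
        if seen.isdisjoint(answers[q]):
            kept.append(q)
            seen |= answers[q]
    return kept

# e.g. run over S'_1p first, then continue over S_2i with the same `seen` state
# to reproduce the two-stage construction of Q described above.
```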
Given G_train, G_valid, and G_test as defined in Section 4.1, we generate training, validation, and test queries of different query structures. During training, we consider the first 5 kinds of query structures. For evaluation, we consider all 9 query structures in Fig. 4, containing query structures that are both seen and unseen during training time. We instantiate queries in the following way. Given a KG and a query structure (which is a DAG), we use a pre-order traversal to assign an entity and a relation to each node and edge in the DAG of the query structure to instantiate a query. Namely, starting from the root of the DAG (which is the target node), we sample an entity e uniformly from the KG to be the root; then, for every node connected to the root in the DAG, we choose a relation r uniformly from the in-coming relations of e in the KG, and a new entity e′ from the set of entities that reach e by r in the KG. We then assign the relation r to the edge and e′ to the node, and continue the process based on the pre-order traversal. This iterative process stops after we have assigned an entity and relation to every node and edge in the DAG. The leaf nodes in the DAG serve as the anchor nodes. Note that during the entity and relation assignment, we specifically filter out all degenerate queries, as shown in Fig. 5. Then we perform a post-order traversal of the DAG on the KG, starting from the anchor nodes, to obtain the set of answer entities to this query. All of our generated datasets will be made publicly available.

Figure 5: Examples of degenerate queries, including r and r−1 appearing along one path, and the same anchor node and relation appearing in intersections.

When generating validation/test queries, we explicitly filter out trivial queries that can be fully answered by subgraph matching on G_train/G_valid. We perform additional experiments on NELL995. Query generation and statistics. Following prior work, we first combine the validation and test sets with the training set to create the whole knowledge graph for NELL995. Then we create new validation and test set splits by randomly selecting 20,000 triples each from the whole knowledge graph. Note that we filter out all the entities that only appear in the validation and test sets but not in the training set. The statistics of NELL995 are shown in Table 11. Based on the new splits, we sample queries in the same way as for FB15k and FB15k-237. The statistics of the queries are listed in Table 12. Results. The results comparing query2box and its baselines (GQE and the query2box variants) are shown in Tables 13, 14, 15, and 16. Overall, we see the results follow a similar trend to the two FB15k datasets; our query2box outperforms GQE as well as its variants by a large margin.
Answering a wide class of logical queries over knowledge graphs with box embeddings in vector space
953
scitldr
Driven by the need for parallelizable hyperparameter optimization methods, this paper studies open loop search methods: sequences that are predetermined and can be generated before a single configuration is evaluated. Examples include grid search, uniform random search, low discrepancy sequences, and other sampling distributions. In particular, we propose the use of k-determinantal point processes in hyperparameter optimization via random search. Compared to conventional uniform random search, where hyperparameter settings are sampled independently, a k-DPP promotes diversity. We describe an approach that transforms hyperparameter search spaces for efficient use with a k-DPP. In addition, we introduce a novel Metropolis-Hastings algorithm which can sample from k-DPPs defined over any space from which uniform samples can be drawn, including spaces with a mixture of discrete and continuous dimensions or tree structure. Our experiments show significant benefits in realistic scenarios with a limited budget for training supervised learners, whether in serial or parallel. Hyperparameter values (regularization strength, model family choices like the depth of a neural network or which nonlinear functions to use, procedural elements like dropout rates, stochastic gradient descent step sizes, and data preprocessing choices) can make the difference between a successful application of machine learning and a wasted effort. Searching among many hyperparameter values requires repeated execution of often-expensive learning algorithms, creating a major obstacle for practitioners and researchers alike. In general, on iteration (evaluation) k, a hyperparameter searcher suggests a d-dimensional hyperparameter configuration x_k ∈ X (e.g., X = R^d, but X could also include discrete dimensions), a worker trains a model using x_k, and returns a validation loss y_k ∈ R computed on a held-out set. In this work we say a hyperparameter searcher is open loop if x_k depends only on {x_i}_{i=1}^{k−1}; examples include choosing x_k uniformly at random BID3, or x_k coming from a low-discrepancy sequence (cf. BID13). We say a searcher is closed loop if x_k depends on both the past configurations and validation losses {(x_i, y_i)}_{i=1}^{k−1}; examples include Bayesian optimization BID23 and reinforcement learning methods BID30. Note that open loop methods can draw an infinite sequence of configurations before training a single model, whereas closed loop methods rely on validation loss feedback in order to make suggestions. While sophisticated closed loop selection methods have been shown to empirically identify good hyperparameter configurations faster (i.e., with fewer iterations) than open loop methods like random search, two trends have rekindled interest in embarrassingly parallel open loop methods: 1) modern deep learning models are taking longer to train, sometimes up to days or weeks, and 2) the rise of cloud resources available to anyone that charge not by the number of machines, but by the number of CPU-hours used, so that 10 machines for 100 hours costs the same as 1000 machines for 1 hour. This paper explores the landscape of open loop methods, identifying tradeoffs that are rarely considered, if at all acknowledged. While random search is arguably the most popular open loop method and chooses each x_k independently of {x_i}_{i=1}^{k−1}, it is by no means the only choice.
In many ways uniform random search is the least interesting of the methods we will discuss, because we will advocate for methods where x_k depends on {x_i}_{i=1}^{k−1} to promote diversity. In particular, we will focus on drawing {x_i}_{i=1}^{k} from a k-determinantal point process (DPP) BID18. We introduce a sampling algorithm which allows DPPs to support real, integer, and categorical dimensions, any of which may have a tree structure, and we describe connections between DPPs and Gaussian processes (GPs). In synthetic experiments, we find our diversity-promoting open-loop method outperforms other open loop methods. In practical hyperparameter optimization experiments, we find that it significantly outperforms other approaches in cases where the hyperparameter values have a large effect on performance. Finally, we compare against a closed loop Bayesian optimization method, and find that sequential Bayesian optimization takes, on average, more than ten times as long to find a good result, for a gain of only 0.15 percent accuracy on a particular hyperparameter optimization task. Open source implementations of both our hyperparameter optimization algorithm (as an extension to the hyperopt package BID4) and the MCMC algorithm introduced in Algorithm 2 are available. Related work. While this work focuses on open loop methods, the vast majority of recent work on hyperparameter tuning has been on closed loop methods, which we briefly review. Much attention has been paid to sequential model-based optimization techniques such as Bayesian optimization BID5 BID23, which sample hyperparameter spaces adaptively. These techniques first choose a point in the space of hyperparameters, then train and evaluate a model with the hyperparameter values represented by that point, then sample another point based on how well previous point(s) performed. When evaluations are fast, inexpensive, and it's possible to evaluate a large number of points (e.g., k = Ω(2^d) for d hyperparameters), these approaches can be advantageous, but in the more common scenario where we have limited time or a limited evaluation budget, the sequential nature of closed loop methods can be cumbersome. In addition, it has been observed that many Bayesian optimization methods with a moderate number of hyperparameters, when run for k iterations, can be outperformed by sampling 2k points uniformly at random, indicating that even simple open loop methods can be competitive. Parallelizing Bayesian optimization methods has proven to be nontrivial, though many agree that it's vitally important. While many algorithms exist which can sample more than one point at each iteration BID7 BID8 BID9 BID14, the sequential nature of Bayesian optimization methods prevents the full parallelization open loop methods can employ. Even running two iterations (with batches of size k/2) will take on average twice as long as fully parallelizing the evaluations, as one can do with open loop methods like grid search, sampling uniformly, or sampling according to a DPP. One line of research has examined the use of k-DPPs for optimizing hyperparameters in the context of parallelizing Bayesian optimization BID15 BID27. At each iteration within one trial of Bayesian optimization, instead of drawing a single new point to evaluate from the posterior, they define a k-DPP over a relevance region from which they sample a diverse set of points.
They found their approach to beat state-of-the-art performance on a number of hyperparameter optimization tasks, and they proved that generating batches by sampling from a k-DPP has better regret bounds than a number of other approaches. They show that a previous batch sampling approach, which selects a batch by sequentially choosing the point that has the highest posterior variance BID7, is just approximating finding the maximum probability set from a k-DPP (an NP-hard problem BID18), and they prove that sampling (as opposed to maximization) has better regret bounds for this optimization task. We use the work of BID15 as a foundation for our exploration of fully-parallel optimization methods, and thus we focus on k-DPP sampling as opposed to maximization. So-called configuration evaluation methods have been shown to perform well by adaptively allocating resources to different hyperparameter settings BID26 BID19. They initially choose a set of hyperparameters to evaluate (often uniformly), then partially train a set of models for these hyperparameters. After some fixed training budget (e.g., time, or number of training examples observed), they compare the partially trained models against one another and allocate more resources to those which perform best. Eventually, these algorithms produce one (or a small number) of fully trained, high-quality models. In some sense, these approaches are orthogonal to open vs. closed loop methods, as the diversity-promoting approach we advocate can be used as a drop-in replacement for the method used to choose the initial hyperparameter assignments. GPs have long been lauded for their expressive power, and have been used extensively in the hyperparameter optimization literature. BID10 show that drawing a sample from a k-DPP with kernel K is equivalent to sequentially sampling k times proportional to the (updated) posterior variance of a GP defined with covariance kernel K. This sequential sampling is one of the oldest hyperparameter optimization algorithms, though our work is the first to perform an in-depth analysis. Additionally, this has a nice information-theoretic justification: since the entropy of a Gaussian is proportional to the log determinant of the covariance matrix, points drawn from a DPP have probability proportional to exp(information gain), and the most probable set from the DPP is the set which maximizes the information gain. With our MCMC algorithm presented in Algorithm 2, we can draw samples with these appealing properties from any space for which we can draw uniform samples. The ability to draw k-DPP samples by sequentially sampling points proportional to the posterior variance grants us another boon: if one has a sample of size k and wants a sample of size k + 1, only a single additional point needs to be drawn, unlike with the sampling algorithms presented in BID18. Using this approach, we can draw samples up to k = 100 in less than a second on a machine with 32 cores. As discussed above, recent trends have renewed interest in open loop methods. While there exist many different batch BO algorithms, analyzing these in the open loop regime (when there are no results from function evaluations) is often rather simple. As there is no information with which to update the posterior mean, function evaluations are hallucinated using the prior, or points are drawn only using information about the posterior variance.
For example, in the open loop regime, BID14's approach without hallucinated observations is equivalent to uniform sampling, and their approach with hallucinated observations (where they use the prior mean in place of a function evaluation, then update the posterior mean and variance) is equivalent to sequentially sampling according to the posterior variance (which is the same as sampling from a DPP). Similarly, open loop optimization in SMAC BID12 is equivalent to first using Latin hypercube sampling to make a large set of diverse candidate points, then sampling k uniformly among these points. Recently, uniform sampling was shown to be competitive with sophisticated closed loop methods for modern hyperparameter optimization tasks like optimizing the hyperparameters of deep neural networks, inspiring other works to explain the phenomenon BID0. BID3 offer one of the most comprehensive studies of open loop methods to date, and focus attention on comparing random search and grid search. A main takeaway of the paper is that uniform random sampling is generally preferred to grid search due to the frequent observation that some hyperparameters have little impact on performance, so random search promotes more diversity in the dimensions that matter. (On a grid, (i1/m, . . . , id/m) is a point for ij = 0, 1, . . . , m for all j, with a total number of grid points equal to (m + 1)^d.) Essentially, if points are drawn uniformly at random in d dimensions but only d′ < d dimensions are relevant, those same points are uniformly distributed (and just as diverse) in d′ dimensions. Grid search, on the other hand, distributes configurations aligned with the axes, so if only d′ < d dimensions are relevant, many configurations are essentially duplicates. However, grid search does have one favorable property that is clear in just one dimension: if k points are distributed on a grid, the maximum spacing between points is equal to 1/(k − 1). But if points are drawn uniformly at random on [0, 1], the expected largest gap between points scales as log(k)/k. If, by bad luck, the optimum is located in this largest gap, this difference could be considerable; we attempt to quantify this idea in the next section. Quantifying the spread of a sequence x = (x_1, x_2, . . . , x_k) (or, similarly, how well x covers a space) is a well-studied concept. In this section we introduce discrepancy, a quantity used by previous work, and dispersion, which we argue is more appropriate for optimization problems. Perhaps the most popular way to quantify the spread of a sequence is star discrepancy. One can interpret the star discrepancy as a multidimensional version of the Kolmogorov-Smirnov statistic between the sequence x and the uniform measure; intuitively, when x contains points which are spread apart, star discrepancy is small. We include a formal definition in Appendix A. Star discrepancy plays a prominent role in the numerical integration literature, as it provides a sharp bound on the numerical integration error through the Koksma-Hlawka inequality (given in Appendix B) BID11. This has led to wide adoption of low discrepancy sequences, even outside of numerical integration problems. For example, BID3 analyzed a number of low discrepancy sequences for some optimization tasks and found improved optimization performance over uniform sampling and grid search. Additionally, low discrepancy sequences such as the Sobol sequence are used as an initialization procedure for some Bayesian optimization schemes BID23.
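The 1/(k − 1) versus log(k)/k contrast above is easy to check empirically. The short simulation below is our own illustration: it compares the largest gap of a one-dimensional grid against the average largest gap of uniform draws.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 50

# Largest gap for an evenly spaced grid on [0, 1]: exactly 1 / (k - 1).
grid_gap = 1.0 / (k - 1)

# Expected largest gap for k uniform draws, estimated by simulation;
# theory says it scales like log(k) / k.
gaps = []
for _ in range(2000):
    x = np.sort(rng.random(k))
    edges = np.concatenate(([x[0]], np.diff(x), [1.0 - x[-1]]))
    gaps.append(edges.max())
print(grid_gap, np.mean(gaps), np.log(k) / k)
```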
Previous work on open loop hyperparameter optimization focused on low discrepancy sequences BID3 BID6, but optimization performance (how close a point in our sequence is to the true, fixed optimum) is our goal, not a sequence with low discrepancy. As discrepancy doesn't directly bound optimization error, we turn instead to dispersion:

d_k(x) = sup_{x∈B} min_{1≤i≤k} ρ(x, x_i),

where ρ is a distance (in our experiments, the L2 distance). Intuitively, the dispersion of a point set is the radius of the largest Euclidean ball containing no points; dispersion measures the worst a point set could be at finding the optimum of a space. Following BID21, we can bound the optimization error as follows. Let f be the function we are aiming to optimize (maximize) with domain B, let m(f) = sup_{x∈B} f(x) be the global optimum of the function, and let m_k(f; x) = sup_{1≤i≤k} f(x_i) be the best-found optimum from the set x. Assuming f is continuous (at least near the global optimum), the modulus of continuity is defined as

ω(f, δ) = sup_{x,y∈B : ρ(x,y)≤δ} |f(x) − f(y)|.

Theorem 1 (BID21). For any point set x with dispersion d_k(x), the optimization error is bounded as

m(f) − m_k(f; x) ≤ ω(f, d_k(x)).

Dispersion can be computed efficiently (unlike discrepancy, D_k(x), which is NP-hard BID29), and we give an algorithm in Appendix C. Dispersion is at least Ω(k^{−1/d}), and while low discrepancy implies low dispersion, the converse does not hold; discrepancy is a global measure which depends on all points, while dispersion only depends on points near the largest "hole". (BID3 found that the Niederreiter and Halton sequences performed similarly to the Sobol sequence, and that the Sobol sequence outperformed Latin hypercube sampling; thus, our experiments include the Sobol sequence, with the Cranley-Patterson rotation, as a representative low-discrepancy sequence.) Therefore we know that the low-discrepancy sequences evaluated in previous work are also low-dispersion sequences in the big-O sense, but as we will see they may behave quite differently. Samples drawn uniformly are not low dispersion, as they have rate (ln(k)/k)^{1/d} BID29. Optimal dispersion in one dimension is found with an evenly spaced grid, but it's unknown how to get an optimal set in higher dimensions (in two dimensions a hexagonal tiling finds the optimal dispersion, but this is only valid when k is divisible by the number of columns and rows in the tiling). Finding a set of points with the optimal dispersion is as hard as solving the circle packing problem in geometry with k equal-sized circles which are as large as possible. Dispersion is bounded from below by a quantity of order k^{−1/d}; it is unknown if this bound is sharp. In FIG4 we plot the dispersion of the Sobol sequence, of samples drawn uniformly at random, and of samples drawn from a k-DPP, in one and two dimensions. To generate the k-DPP samples, we sequentially drew samples proportional to the (updated) posterior variance (using an RBF kernel, with σ = √2/k), as described in Section 2.2. When d = 1, the regular structure of the Sobol sequence causes it to have increasingly large plateaus, as there are many "holes" of the same size (by construction, each individual dimension of the d-dimensional Sobol sequence has these same plateaus). For example, the Sobol sequence has the same dispersion for 42 ≤ k ≤ 61 and for 84 ≤ k ≤ 125. Samples drawn from a k-DPP appear to have the same asymptotic rate as the Sobol sequence, but they don't suffer from the plateaus. When d = 2, the k-DPP samples have lower average dispersion and lower variance. One other natural surrogate of average optimization performance is to measure the distance from a fixed point, say (1/2, . . . , 1/2), or from the origin, to the nearest point in the length-k sequence. Our experiments (in Appendix D) on these metrics show that the k-DPP biases samples toward the corners of the space, which can be beneficial when the practitioner defined the search space with bounds that are too small.
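The quantities in this section are easy to reproduce numerically. Below is a small, self-contained sketch (our own construction, not the paper's released code) that draws a k-DPP sample by the sequential posterior-variance procedure just mentioned (RBF kernel, σ = √2/k) and then estimates dispersion by Monte Carlo probing; the exact supremum would require the Appendix C algorithm.

```python
import numpy as np

def kdpp_via_gp_variance(k, d, n_cand=2000, seed=0):
    """Sequentially sample points proportional to GP posterior variance
    (RBF kernel, sigma = sqrt(2)/k); a candidate pool stands in for [0, 1]^d."""
    rng = np.random.default_rng(seed)
    sigma2 = 2.0 / k**2
    cand = rng.random((n_cand, d))
    rbf = lambda a, b: np.exp(-((a[:, None, :] - b[None, :, :]) ** 2).sum(-1) / (2 * sigma2))
    chosen = [cand[rng.integers(n_cand)]]   # prior variance is constant -> uniform draw
    for _ in range(k - 1):
        X = np.stack(chosen)
        K_inv = np.linalg.inv(rbf(X, X) + 1e-8 * np.eye(len(chosen)))
        k_xc = rbf(cand, X)
        var = np.clip(1.0 - np.einsum('ij,jk,ik->i', k_xc, K_inv, k_xc), 1e-12, None)
        chosen.append(cand[rng.choice(n_cand, p=var / var.sum())])
    return np.stack(chosen)

def dispersion_mc(x, n_probe=50000, seed=0):
    """Monte Carlo lower bound on d_k(x) = sup_z min_i ||z - x_i||_2 over [0, 1]^d."""
    rng = np.random.default_rng(seed)
    z = rng.random((n_probe, x.shape[1]))
    return np.linalg.norm(z[:, None, :] - x[None, :, :], axis=-1).min(1).max()

pts = kdpp_via_gp_variance(k=20, d=2)
print(dispersion_mc(pts))
```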
Note that low-discrepancy sequences are usually defined only on the d-dimensional hypercube, so for hyperparameter search involving conditional hyperparameters (i.e., those with tree structure) they are not appropriate. In what follows, we study the k-DPP in more depth and how it performs on real-world hyperparameter tuning problems. We begin by reviewing DPPs and k-DPPs. Let B be a domain from which we would like to sample a finite subset. (In our use of DPPs, this is the set of hyperparameter assignments.) In general, B could be discrete or continuous; here we assume it is discrete with N values, and we define Y = {1, . . . , N} to be a set which indexes B (this index set will be particularly useful in Algorithm 1). In Section 4.2 we address the case when B has continuous dimensions. A DPP defines a probability distribution over 2^Y (all subsets of Y) with the property that two elements of Y are more (less) likely to both be chosen the more dissimilar (similar) they are. Let the random variable Y range over finite subsets of Y. There are several ways to define the parameters of a DPP. We focus on L-ensembles, which define the probability that a specific subset is drawn (i.e., P(Y = A) for some A ⊂ Y) as

P(Y = A) = det(L_A) / det(L + I),

where L is an N × N positive semi-definite matrix and L_A is its principal minor indexed by A. As shown in BID18, this definition of L admits a decomposition into terms representing the quality and diversity of the elements of Y. For any y_i, y_j ∈ Y, let

L_{ij} = q_i · K(φ_i, φ_j) · q_j,

where q_i > 0 is the quality of y_i, φ_i ∈ R^d is a featurized representation of y_i, and K: R^d × R^d → R is a similarity kernel (e.g., cosine distance). (We will discuss how to featurize hyperparameter settings in Section 4.3.) Here, we fix all q_i = 1; in future work, closed loop methods might make use of q_i to encode evidence about the quality of particular hyperparameter settings to adapt the DPP's distribution over time. DPPs have support over all subsets of Y, including ∅ and Y itself. In many practical settings, one may have a fixed budget that allows running the training algorithm k times, so we require precisely k elements of Y for evaluation. k-DPPs are distributions over subsets of Y of size k. Thus,

P_L^k(Y = A) = det(L_A) / Σ_{|A′|=k} det(L_{A′}),

for |A| = k. Sampling from k-DPPs has been well-studied. When the base set B is a set of discrete items, exact sampling algorithms are known which run in O(N k^3) BID18. When the base set is a continuous hyperrectangle, a recent exact sampling algorithm was introduced, based on a connection with Gaussian processes (GPs), which runs in O(dk^2 + k^3) BID10. We are unaware of previous work which allows for sampling from k-DPPs defined over any other base sets. BID1 present a Metropolis-Hastings algorithm (included here as Algorithm 1) which is a simple and fast alternative to the exact sampling procedures described above. However, it is restricted to discrete domains. We propose a generalization of the MCMC algorithm which preserves the relevant computations while allowing sampling from any base set from which we can draw uniform samples, including those with discrete dimensions, continuous dimensions, some continuous and some discrete dimensions, or even (conditional) tree structures (Algorithm 2).
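Since the generalized algorithm only touches the base set through uniform draws, a faithful toy version fits in a few lines. The sketch below is our own NumPy rendering under stated assumptions: `sample_uniform` and `feature_fn` are user-supplied stand-ins (one uniform draw from B, and the featurization of Section 4.3), all quality scores are fixed to 1 as in the paper, and for brevity it recomputes the full k × k principal minor each step rather than caching previously computed entries.

```python
import numpy as np

def kdpp_mh_sample(sample_uniform, feature_fn, k, sigma2=0.1, n_steps=5000, seed=0):
    """Metropolis-Hastings over size-k point sets: propose swapping one of the
    k points for a fresh uniform draw, accept with the ratio of determinants."""
    rng = np.random.default_rng(seed)

    def L_matrix(points):
        phi = np.stack([feature_fn(p) for p in points])
        sq = ((phi[:, None, :] - phi[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2.0 * sigma2))      # RBF similarity; quality q_i = 1

    beta = [sample_uniform() for _ in range(k)]
    logdet = np.linalg.slogdet(L_matrix(beta))[1]
    for _ in range(n_steps):
        cand = list(beta)
        cand[rng.integers(k)] = sample_uniform()         # swap one point uniformly
        cand_logdet = np.linalg.slogdet(L_matrix(cand))[1]
        if np.log(rng.random()) < cand_logdet - logdet:  # p = min(1, det'/det)
            beta, logdet = cand, cand_logdet
    return beta
```

Mixed or tree-structured spaces need no changes: `sample_uniform` may, for example, return a (dropout rate, nonlinearity) pair, with `feature_fn` one-hot encoding the categorical part.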
To the best of our knowledge, this is the first algorithm which allows for sampling from a k-DPP defined over any space other than strictly continuous or strictly discrete spaces, and thus the first algorithm to utilize the expressive capabilities of the posterior variance of a GP in these regimes.

Algorithm 1 (MCMC sampling for a discrete k-DPP BID1). Input: the matrix L and the budget k. Initialize Y as k indices of L drawn uniformly; then, until mixed, repeatedly sample u ∈ Y and v ∈ Y \ Y uniformly and set Y = Y ∪ {v} \ {u} with probability p = min(1, det(L_{Y∪{v}\{u}}) / det(L_Y)); finally, return B_Y.

Algorithm 1 proceeds as follows: first, initialize a set Y with k indices of L, drawn uniformly. Then, at each iteration, sample two indices of L (one within and one outside of the set Y), and with some probability replace the item in Y with the other. When we have continuous dimensions in the base set, however, we can't define the matrix L, so sampling indices from it is not possible. We propose Algorithm 2, which samples points directly from the base set B instead (assuming continuous dimensions are bounded), and computes only the principal minors of L needed for the relevant computations on the fly.

Algorithm 2 (drawing a sample from a k-DPP defined over a space with continuous and discrete dimensions). Input: a base set B with some continuous and some discrete dimensions, a quality function Ψ, and a feature function Φ. Initialize β = {β_1, . . . , β_k} drawn uniformly from B; then, until mixed, repeatedly sample a position u ∈ {1, . . . , k} and a candidate point uniformly from B, let β′ denote β with β_u replaced by the candidate, compute the quality score for each item, q_i = Ψ(β_i) ∀i, together with the similarity entries needed to form the two k × k principal minors L_β and L_{β′}, and set β = β′ with probability p = min(1, det(L_{β′}) / det(L_β)); finally, return β.

Even in the case where the dimensions of B are discrete, Algorithm 2 requires less computation and space than Algorithm 1 (assuming the quality and similarity scores are stored once computed, and retrieved when needed). Previous analyses claimed that Algorithm 1 should mix after O(N log(N)) steps. There are O(N^2) computations required to compute the full matrix L, and at each iteration we will compute at most O(k) new elements of L, so even in the worst case we will save space and computation whenever k log(N) < N. In expectation, we will save significantly more. Let φ_i be a feature vector for y_i ∈ Y, a modular encoding of the attribute-value mapping assigning values to different hyperparameters, in which fixed segments of the vector are assigned to each hyperparameter attribute (e.g., the dropout rate, the choice of nonlinearity, etc.). For a hyperparameter that takes a numerical value in the range [h_min, h_max], we encode value h using one dimension (j) of φ and project it into [0, 1]:

φ^(j) = (h − h_min) / (h_max − h_min).

We compute similarity between feature vectors with an RBF kernel, K(φ_i, φ_j) = exp(−‖φ_i − φ_j‖² / (2σ²)), and hence label our approach k-DPP-RBF. Different values for σ² lead to models with different properties; when σ² is small, points that are spread out interact little with one another, and when σ² is large, the increased repulsion between the points encourages them to be as far apart as possible. Many real-world hyperparameter search spaces are tree-structured. For example, the number of layers in a neural network is a hyperparameter, and each additional layer adds at least one new hyperparameter which ought to be tuned (the number of nodes in that layer). For a binary hyperparameter like whether or not to use regularization, we use a one-hot encoding. When this hyperparameter is "on," we set the associated regularization strength as above, and when it is "off" we set it to zero. Intuitively, with all other hyperparameter settings equal, this causes the off setting to be closest to the least strong regularization. One can also treat higher-level design decisions as hyperparameters BID17, such as whether to train a logistic regression classifier, a convolutional neural network, or a recurrent neural network.
In this construction, the type of model would be a categorical variable (and thus get a one-hot encoding), and all child hyperparameters for an "off" model setting (such as the convergence tolerance for logistic regression, when training a recurrent neural network) would be set to zero. In this section we present our hyperparameter optimization experiments. Our experiments consider a setting where hyperparameters have a large effect on performance: a convolutional neural network for text classification BID16. The task is binary sentiment analysis on the Stanford Sentiment Treebank BID25. On this balanced dataset, random guessing leads to 50% accuracy. We use the CNN-non-static model from BID16, with skip-gram BID20 vectors. The model architecture consists of a convolutional layer, a max-over-time pooling layer, and then a fully connected layer leading to a softmax. All k-DPP samples are drawn using Algorithm 2. We begin with a search over three continuous hyperparameters and one binary hyperparameter, with a simple tree structure: the binary hyperparameter indicates whether or not the model will use L2 regularization, and one of the continuous hyperparameters is the regularization strength. We assume a budget of k = 20 evaluations by training the convolutional neural net. L2 regularization strengths in the range [e^-5, e^-1] (or no regularization) and dropout rates in [0.0, 0.7] are considered. We consider three increasingly "easy" ranges for the learning rate:
• Hard: [e^-5, e^5], where the majority of the range leads to accuracy no better than chance.
• Medium: [e^-5, e^-1], where half of the range leads to accuracy no better than chance.
• Easy: [e^-10, e^-3], where the entire range leads to models that beat chance.
FIG7 shows the accuracy (averaged over 50 runs) of the best model found after exploring 1, 2, . . ., k hyperparameter settings. We see that k-DPP-RBF finds better models in fewer iterations than the other approaches, especially in the most difficult case. FIG7 also compares the sampling methods against a Bayesian optimization technique using a tree-structured Parzen estimator (BO-TPE; BID5). This technique evaluates points sequentially, allowing the model to choose the next point based on how well previous points performed (a closed loop approach). It is state-of-the-art on tree-structured search spaces (though its sequential nature limits parallelization). Surprisingly, we find it performs the worst, even though it takes advantage of additional information. We hypothesize that the exploration/exploitation tradeoff in BO-TPE causes it to commit to more local search before exploring the space fully, thus not finding hard-to-reach global optima. Note that when considering points sampled uniformly or from a DPP, the order of the k hyperparameter settings within one trial is arbitrary (though this is not the case with BO-TPE, as it is an iterative algorithm). In all cases the variance of the best of the k points is lower than when sampled uniformly, and the differences in the plots are all significant with p < 0.01. BID28 analyzed the stability of convolutional neural networks for sentence classification with respect to a large set of hyperparameters, and found a set of six which they claimed had the largest impact: the number of kernels, the difference in size between the kernels, the size of each kernel, dropout, regularization strength, and the number of filters.
We optimized over their prescribed "Stable" ranges for three open loop methods and one closed loop method; average accuracies with 95% confidence intervals from 50 trials of hyperparameter optimization are shown in Figure 3, across k = 5, 10, 15, 20 iterations. We find that even when optimizing over a space for which all values lead to good models, k-DPP-RBF outperforms the other methods. Our experiments reveal that, while the hyperparameters proposed by BID28 can have an effect, the learning rate, which they do not analyze, is at least as impactful. Here we compare our approach against Spearmint BID23, perhaps the most popular Bayesian optimization package. Figure 4 shows wall clock time and accuracy for 25 runs on the "Stable" search space of four hyperparameter optimization approaches: k-DPP-RBF (with k = 20), batch Spearmint with 2 iterations of batch size 10, batch Spearmint with 10 iterations of batch size 2, and sequential Spearmint. Each point in the plot is one hyperparameter assignment evaluation. The vertical lines represent how long, on average, it takes each approach to find its best result in one run. We see that all evaluations for k-DPP-RBF finish quickly, while even the fastest batch method (2 batches of size 10) takes nearly twice as long on average to find a good result. The final average best-found accuracies are 82.61 for k-DPP-RBF, 82.65 for Spearmint with 2 batches of size 10, 82.7 for Spearmint with 10 batches of size 2, and 82.76 for sequential Spearmint. Thus, we find it takes on average more than ten times as long for sequential Spearmint to find its best solution, for a gain of only 0.15 percent accuracy. We have explored open loop hyperparameter optimization built on sampling from a k-DPP. We described how to define a k-DPP over hyperparameter search spaces, and showed that k-DPPs retain the attractive parallelization capabilities of random search. In synthetic experiments, we showed k-DPP samples perform well on a number of important metrics, even for large values of k. In hyperparameter optimization experiments, we see k-DPP-RBF outperform other open loop methods. Additionally, we see that sequential methods, even when using more than ten times as much wall clock time, gain less than 0.16 percent accuracy on a particular hyperparameter optimization problem. An open-source implementation of our method is available.
A STAR DISCREPANCY The star discrepancy of a point set x = (x_1, . . ., x_k) in [0, 1]^d is defined as
D*_k(x) = sup over axis-aligned boxes B anchored at the origin of | (1/k) |{x_i in B}| - lambda(B) |,
where lambda denotes the Lebesgue measure. It is well known that a sequence chosen uniformly at random from [0, 1]^d has an expected star discrepancy bounded below and above by quantities of the same order BID22, whereas sequences are known to exist with substantially smaller star discrepancy BID24, where both bounds depend on absolute constants. Comparing the star discrepancy of sampling uniformly and Sobol, the bounds suggest that as d grows large relative to k, Sobol starts to suffer. Indeed, BID2 notes that the Sobol rate is not even valid until k = Omega(2^d), which motivates them to study a formulation of a DPP that has a star discrepancy between Sobol and random and holds for all k, small and large. They primarily approached this problem from a theoretical perspective and did not include experimental results. Their work, in part, motivates us to look at DPPs as a solution for hyperparameter optimization. B KOKSMA-HLAWKA INEQUALITY Let B be the d-dimensional unit cube, and let f have bounded Hardy-Krause variation Var_HK(f) on B. Let x = (x_1, x_2, . . ., x_k) be a set of points in B at which the function f will be evaluated to approximate an integral.
The Koksma-Hlawka inequality bounds the numerical integration error by the product of the star discrepancy and the variation:
| (1/k) sum over i of f(x_i) - integral over B of f(x) dx | <= D*_k(x) Var_HK(f).
We can see that for a given f, finding x with low star discrepancy can improve numerical integration approximations. To compute the dispersion of a point set X_k, find a (bounded) Voronoi diagram over the search space for X_k. For each vertex in the Voronoi diagram, find the closest point in X_k; the dispersion is the maximum over these distances. One natural surrogate of average optimization performance is to define a hyperparameter space on [0, 1]^d and measure the distance from a fixed point, say the center (1/2)1; this is motivated by a quadratic Taylor series approximation around the minimum of the hypothetical function we wish to minimize. In the first column of Figure 5 we plot the smallest distance from the center (1/2)1, as a function of the length of the sequence (in one dimension), for the Sobol sequence, points drawn uniformly at random, and a DPP. We observe that all methods appear comparable when it comes to distance to the center. Sampling near the corners of the hypercube is, in some sense, the opposite of what low discrepancy sequences attempt to do. While Sobol and uniformly random sequences will not bias themselves towards the corners, a DPP does. This happens because points from a DPP are sampled according to how distant they are from the existing points, which tends to favor points in the corners. This same behavior of sampling in the corners is also very common for Bayesian optimization schemes, which is not a surprise due to the known connections between sampling from a DPP and Gaussian processes (see Section 2.2). In the second column of Figure 5 we plot the distance to the origin, which is just an arbitrarily chosen corner of the hypercube. As expected, we observe that the DPP tends to outperform uniform at random and Sobol in this metric. Figure 5: Comparison of the Sobol sequence, samples from a k-DPP, and uniform random points for two metrics of interest. These log-log plots show that uniform sampling and k-DPP-RBF perform comparably to the Sobol sequence in terms of distance to the center, but on the other metric (distance to the origin) k-DPP-RBF samples outperform the Sobol sequence and uniform sampling.
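To reproduce the flavor of these two metrics, here is a tiny sketch comparing uniform and Sobol samples (the k-DPP curve would additionally require a sampler such as the one sketched earlier); it assumes SciPy's qmc module is available, and the dimensions and budgets are illustrative choices.

    import numpy as np
    from scipy.stats import qmc  # assumes SciPy >= 1.7

    def min_dist(points, target):
        # Smallest Euclidean distance from any point in the set to `target`.
        return np.linalg.norm(points - target, axis=1).min()

    d, k, trials = 2, 64, 50
    rng = np.random.default_rng(0)
    center, origin = np.full(d, 0.5), np.zeros(d)

    samplers = {
        "uniform": lambda: rng.random((k, d)),
        "sobol": lambda: qmc.Sobol(d, scramble=True, seed=rng).random(k),
    }
    for name, draw in samplers.items():
        c = np.mean([min_dist(draw(), center) for _ in range(trials)])
        o = np.mean([min_dist(draw(), origin) for _ in range(trials)])
        print(name, "dist-to-center", round(c, 4), "dist-to-origin", round(o, 4))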
We address fully parallel hyperparameter optimization with Determinantal Point Processes.
954
scitldr
Graph embedding techniques have been increasingly deployed in a multitude of different applications that involve learning on non-Euclidean data. However, existing graph embedding models either fail to incorporate node attribute information during training or suffer from node attribute noise, which compromises the accuracy. Moreover, very few of them scale to large graphs due to their high computational complexity and memory usage. In this paper we propose GraphZoom, a multi-level framework for improving both accuracy and scalability of unsupervised graph embedding algorithms. GraphZoom first performs graph fusion to generate a new graph that effectively encodes the topology of the original graph and the node attribute information. This fused graph is then repeatedly coarsened into a much smaller graph by merging nodes with high spectral similarities. GraphZoom allows any existing embedding method to be applied to the coarsened graph, before it progressively refines the embeddings obtained at the coarsest level back to increasingly finer graphs. We have evaluated our approach on a number of popular graph datasets for both transductive and inductive tasks. Our experiments show that GraphZoom increases the classification accuracy and significantly reduces the run time compared to state-of-the-art unsupervised embedding methods. Recent years have seen a surge of interest in graph embedding, which aims to encode nodes, edges, or (sub)graphs into low-dimensional vectors that maximally preserve graph structural information. Graph embedding techniques have shown promising results for various applications such as vertex classification, link prediction, and community detection. However, current graph embedding methods have several drawbacks. On the one hand, random-walk based embedding algorithms, such as DeepWalk and node2vec, attempt to embed a graph based on its topology without incorporating node attribute information, which limits their embedding power. Later, graph convolutional networks (GCN) were developed with the basic notion that node embeddings should be smooth over the graph. While GCN leverages both topology and node attribute information for simplified graph convolution in each layer, it may suffer from high frequency noise in the initial node features, which compromises the embedding quality. On the other hand, few embedding algorithms can scale well to large graphs with millions of nodes due to their high computation and storage cost. For example, graph neural networks (GNNs) such as GraphSAGE collectively aggregate feature information from the neighborhood. When stacking multiple GNN layers, the final embedding vector of a node involves the computation of a large number of intermediate embeddings from its neighbors. This will not only drastically increase the number of computations among nodes but also lead to high memory usage for storing the intermediate results. In the literature, increasing the accuracy and improving the scalability of graph embedding methods are largely viewed as two orthogonal problems. Hence most research efforts are devoted to addressing only one of the problems. For instance, prior works proposed multi-level methods to obtain high-quality embeddings by training unsupervised models at every level, but their techniques do not improve scalability due to the additional training overhead. Another line of work developed a heuristic algorithm to coarsen the graph by merging nodes with similar local structures.
They use GCN to refine the embeddings on the coarsened graphs, which not only is time-consuming to train but may also degrade accuracy when multiple GCN layers are stacked together. More recently, a similar strategy was proposed to coarsen the graph while preserving certain properties of the graph structure; however, this work lacks a proper refinement method to improve the embedding quality. In this paper we propose GraphZoom, a multi-level spectral approach to enhancing the quality and scalability of unsupervised graph embedding methods. Specifically, GraphZoom consists of four kernels: graph fusion, spectral graph coarsening, graph embedding, and embedding refinement. More concretely, graph fusion first converts the node feature matrix into a feature graph and then fuses it with the original topology graph. The fused graph provides richer information to the ensuing graph embedding step to achieve higher accuracy. Spectral graph coarsening produces a series of successively coarsened graphs by merging nodes based on their spectral similarities. We show that our coarsening algorithm can efficiently and effectively retain the first few eigenvectors of the graph Laplacian matrix, which is critical for preserving the key graph structures. During the graph embedding step, any of the existing unsupervised graph embedding techniques can be applied to obtain node embeddings for the graph at the coarsest level. Embedding refinement is then employed to refine the embeddings back to the original graph by applying a proper graph filter to ensure the embeddings are smooth over the graph. We validate the proposed GraphZoom framework on three transductive benchmarks (the Cora, Citeseer, and Pubmed citation networks) as well as two inductive datasets (PPI and Reddit) for the vertex classification task. We further test on the Friendster dataset, which contains 8 million nodes and 400 million edges, to show the scalability of GraphZoom. Our experiments show that GraphZoom can improve the classification accuracy over all baseline embedding methods for both transductive and inductive tasks. Our main technical contributions are summarized as follows: • GraphZoom generates high-quality embeddings. We propose novel algorithms to encode graph structures and node attribute information in a fused graph and to exploit graph filtering during refinement to remove high frequency noise. This results in an increase of the embedding accuracy over prior art by up to 19.4%. • GraphZoom improves scalability. Our approach can significantly reduce the embedding run time by effectively coarsening the graph without losing its key spectral properties. Experiments show that GraphZoom can accelerate the entire embedding process by up to 40.8x while producing similar or better accuracy than state-of-the-art techniques. • GraphZoom is highly composable. Our framework is agnostic to the underlying graph embedding technique. Any existing unsupervised embedding method, either transductive or inductive, can be incorporated by GraphZoom in a plug-and-play manner. GraphZoom draws inspiration from multi-level graph embedding and graph filtering to boost the performance and speed of unsupervised embedding methods. Multi-level graph embedding. Multi-level graph embedding attempts to coarsen the graph into a series of levels where graph embedding techniques can be applied on those coarsened graphs of decreasing size. One line of work coarsens the graph into several levels and then performs embedding on the hierarchy of graphs from the coarsest to the original one.
Others adopt a similar idea, hierarchically sampling the original graph into multi-level graphs whose embedding vectors are concatenated to obtain the final node embeddings of the original graph. Both of these works only focus on improving embedding quality without improving scalability. Later works (Zhang et al., 2018b) attempt to improve graph embedding scalability by embedding only the coarsest graph; however, their approaches lack proper refinement methods to generate high-quality embeddings of the original graph. MILE was then proposed, which trains only the coarsest graph to obtain coarse embeddings and leverages GCN as the embedding refinement method to improve embedding quality. Nevertheless, MILE requires training a GCN model, which is very time-consuming for large graphs, and it cannot support inductive embedding models due to the transductive property of GCN. In contrast to prior multi-level graph embedding techniques, GraphZoom is a simple yet theoretically motivated spectral approach to improving both the embedding quality and the scalability of unsupervised graph embedding models. Graph filtering. Graph filters are direct analogs of classical filters in the signal processing field, but intended for signals defined on graphs. Prior work defined graph filters in both vertex and spectral domains and applied graph filters to image denoising and reconstruction tasks. Subsequent work showed the fundamental link between graph embedding and filtering by proving that the GCN model implicitly exploits a graph filter to remove high frequency noise from the node feature matrix; a graph filter neural network (gfNN) was then proposed to derive a stronger graph filter and improve the embedding results. Further work derived two generalized graph filters and applied them to graph embedding models to improve their embedding quality on various classification tasks. 3 GRAPHZOOM FRAMEWORK Figure 1 shows the proposed GraphZoom framework, which consists of four key phases: Phase 1 is graph fusion, which constructs a weighted graph that fuses the information of both the graph topology and the node attributes; in Phase 2, a spectral graph coarsening process is applied to form a hierarchy of coarsened fused graphs with decreasing size; in Phase 3, any of the prior graph embedding methods can be applied to the fused graph at the coarsest level; in Phase 4, the embedding vectors obtained at the coarsest level are mapped onto a finer graph using the mapping operators determined during the coarsening phase, followed by a refinement (smoothing) procedure; by iteratively applying Phase 4 to increasingly finer graphs, the embedding vectors for the original graph can eventually be obtained. In the rest of this section, we describe each of these four phases in more detail. Figure 1: Overview of the GraphZoom framework. Graph fusion aims to construct a weighted graph that has the same number of nodes as the original graph but a potentially different set of edges (weights) that encapsulate the original graph topology as well as the node attribute information. Specifically, given an undirected graph G = (V, E) with N nodes, its adjacency matrix A_topo in R^{N x N}, and its node attribute (feature) matrix X in R^{N x K}, where K is the dimension of the node attribute vectors, graph fusion can be interpreted as a function f(.) that outputs a weighted graph G_fusion = (V, E_fusion) represented by its adjacency matrix A_fusion in R^{N x N}, namely, A_fusion = f(A_topo, X).
Graph fusion first converts the initial attribute matrix X into a weighted node attribute graph G_feat = (V, E_feat) by generating a k-nearest-neighbor (kNN) graph based on the l2-norm distance between the attribute vectors of each node pair. Note that a straightforward implementation requires comparing all possible node pairs and then selecting the top-k nearest neighbors. However, such a naive approach has a worst-case time complexity of O(N^2), which certainly does not scale to large graphs. To allow constructing the attribute graph in linear time, we leverage our O(|E|)-complexity spectral graph coarsening scheme, described in detail in Section 3.2. More specifically, our approach starts by coarsening the original graph G to obtain a substantially reduced graph that has many fewer nodes. Note that such a procedure is very similar to spectral graph clustering, which aims to group nodes into clusters of high conductance. Once such node clusters are formed through spectral coarsening, selecting the top-k nearest neighbors within each cluster can be accomplished in O(M^2), where M is the average node count within a cluster. Since we have roughly N/M clusters, the total run time for constructing the approximate kNN graph becomes O(MN). When a proper coarsening ratio (M much smaller than N) is chosen, say M = 50, the overall run time complexity becomes almost linear. For each edge in the attribute graph, we assign its weight w_{i,j} according to the cosine similarity of the two nodes' attribute vectors: w_{i,j} = (X_{i,:} . X_{j,:}) / (||X_{i,:}|| ||X_{j,:}||), where X_{i,:} and X_{j,:} are the attribute vectors of nodes i and j. Finally, we can construct the fused graph by combining the topological graph and the attribute graph: A_fusion = A_topo + beta A_feat, where beta allows us to balance the graph topological and node attribute information in the fusion process. The fused graph enables the underlying graph embedding model to utilize both graph topological and node attribute information, and thus can be fed into any downstream graph embedding procedure to further improve embedding quality; a small sketch of this step is given below.
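A minimal sketch of the fusion step follows; for clarity it uses a brute-force O(N^2) kNN search, whereas the method described above restricts the search to spectral clusters to stay near-linear. The function name and SciPy-based layout are ours.

    import numpy as np
    import scipy.sparse as sp

    def fuse_graph(A_topo, X, k=10, beta=1.0):
        # A_fusion = A_topo + beta * A_feat, where A_feat is a kNN graph over
        # node attributes with cosine-similarity edge weights.
        N = X.shape[0]
        Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
        cos = Xn @ Xn.T                       # cosine similarity of attribute vectors
        rows, cols, vals = [], [], []
        for i in range(N):
            d = np.linalg.norm(X - X[i], axis=1)   # l2 distance for neighbor choice
            nbrs = np.argsort(d)[1:k + 1]          # skip the node itself
            for j in nbrs:
                rows.append(i); cols.append(j); vals.append(cos[i, j])
        A_feat = sp.coo_matrix((vals, (rows, cols)), shape=(N, N))
        A_feat = (A_feat + A_feat.T) / 2           # symmetrize
        return A_topo + beta * A_feat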
Graph coarsening via global spectral embedding. To reduce the size of the original graph while preserving important spectral properties (e.g., the first few eigenvalues and eigenvectors of the graph Laplacian matrix), a straightforward way is to first embed the graph into a k-dimensional space using the first k eigenvectors of the graph Laplacian matrix, which is also known as the spectral graph embedding technique. Next, the graph nodes that are close to each other in the low-dimensional embedding space can be aggregated to form the coarse-level nodes and subsequently the reduced graph. However, it would be very costly to calculate the eigenvectors of the original graph Laplacian, especially for very large graphs. Graph coarsening via local spectral embedding. In this work, we leverage an efficient yet effective local spectral embedding scheme to identify node clusters based on emerging graph signal processing techniques. There are obvious analogies between traditional signal processing (Fourier analysis) and graph signal processing: the signals at different time points in classical Fourier analysis correspond to the signals at different nodes in an undirected graph, and the more slowly oscillating functions in the time domain correspond to the graph Laplacian eigenvectors associated with lower eigenvalues, i.e., the more slowly varying (smoother) components across the graph. Instead of directly using the first few eigenvectors of the original graph Laplacian, we apply a simple smoothing (low-pass graph filtering) function to k random vectors to obtain smoothed vectors for k-dimensional graph embedding, which can be achieved in linear time. Consider a random vector (graph signal) x that can be expressed as a linear combination of the eigenvectors u of the graph Laplacian. A low-pass graph filter can be adopted to quickly filter out the "high-frequency" components of the random graph signal, i.e., the eigenvectors corresponding to high eigenvalues of the graph Laplacian. By applying the smoothing function on x, a smoothed vector x~ can be obtained, which can be considered a linear combination of the first few eigenvectors. More specifically, we apply a few (e.g., five to ten) Gauss-Seidel iterations for solving the linear system of equations L_G x^(i) = 0 to a set of t initial random vectors T = (x^(1), . . ., x^(t)) that are orthogonal to the all-one vector 1, i.e., 1^T x^(i) = 0, where L_G is the Laplacian matrix of graph G or G_fusion. Based on the smoothed vectors in T, each node is embedded into a t-dimensional space such that nodes p and q are considered spectrally similar if their low-dimensional embedding vectors x_p in R^t and x_q in R^t are highly correlated. Here the node distance is measured by the spectral node affinity a_{p,q} for neighboring nodes p and q:
a_{p,q} = |<x_p, x_q>|^2 / (<x_p, x_p> <x_q, x_q>).
Once the node aggregation schemes are determined, the graph mapping operators on each level can be obtained and leveraged for constructing a series of spectrally-reduced graphs. We emphasize that the aggregation scheme based on the above spectral node affinity calculations has (linear) complexity O(|E_fusion|) and thus allows preserving the spectral (global or structural) properties of the original graph in a highly efficient and effective way. As suggested in prior work, a spectral sparsification procedure can be applied to effectively control the densities of coarse-level graphs; in this work, a similarity-aware spectral sparsification tool, "GRASS", has been adopted to achieve the desired graph sparsity at the coarsest level. A minimal sketch of the smoothing and affinity computation follows.
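The following is a small sketch of the smoothing and affinity computation; it substitutes a Jacobi-style sweep for Gauss-Seidel and omits the aggregation and sparsification bookkeeping, so it should be read as an illustration of the idea rather than the full coarsening algorithm.

    import numpy as np
    import scipy.sparse as sp

    def smooth_test_vectors(A, t=8, iters=7, seed=0):
        # Approximate the first few Laplacian eigenvectors by relaxing
        # L x = 0 starting from t random vectors orthogonal to the all-one vector.
        rng = np.random.default_rng(seed)
        N = A.shape[0]
        d = np.asarray(A.sum(1)).ravel()
        L = sp.diags(d) - A                      # combinatorial Laplacian
        D_inv = 1.0 / np.maximum(d, 1e-12)
        X = rng.standard_normal((N, t))
        X -= X.mean(0)                           # enforce 1^T x = 0
        for _ in range(iters):
            # one Jacobi-style relaxation sweep (stand-in for Gauss-Seidel)
            X = X - D_inv[:, None] * (L @ X)
            X -= X.mean(0)
        return X

    def spectral_affinity(X, p, q):
        # a_{p,q} = |<x_p, x_q>|^2 / (<x_p, x_p> <x_q, x_q>): high values flag
        # spectrally-similar neighbors, which the coarsening step aggregates.
        num = float(X[p] @ X[q]) ** 2
        return num / ((X[p] @ X[p]) * (X[q] @ X[q]) + 1e-12)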
Embedding the coarsest graph. Once the coarsest graph G_m is constructed, node embeddings E_m on G_m can be obtained by E_m = l(G_m), where l(.) can be any unsupervised embedding method. Once the base node embeddings are available, we can easily project the node embeddings from graph G_{i+1} onto the finer-grained graph G_i with the corresponding projection operator H^i_{i+1} = (H^{i+1}_i)^T: E_i = H^i_{i+1} E_{i+1}. Due to the structure of the projection operator, the embedding of a node in the coarse-grained graph is directly copied to the nodes of the same aggregation set in the fine-grained graph. In this case, spectrally-similar nodes in the fine-grained graph will have the same embedding if they were aggregated into a single node during the coarsening phase. To further improve the quality of the mapped embeddings, we apply a local refinement process motivated by Tikhonov regularization, smoothing the node embeddings over the graph by minimizing the following objective:
min over E~_i of ||E~_i - E_i||_F^2 + tr(E~_i^T L_i E~_i),
where L_i and E_i are the normalized Laplacian matrix and the mapped embedding matrix of the graph at the i-th coarsening level, respectively. The refined embedding matrix E~_i is obtained by solving this objective, whose first term enforces the refined embeddings to agree with the mapped embeddings, while the second term employs Laplacian smoothing to smooth E~_i over the graph. Taking the derivative of the objective function and setting it to zero, we have
E~_i = (I + L_i)^{-1} E_i,
where I is the identity matrix. However, obtaining refined embeddings this way is very time-consuming, since it involves a matrix inversion whose time complexity is O(N^3). Instead, we exploit a more efficient graph filter to smooth the embeddings. Denote the term (I + L)^{-1} by h(L); its corresponding graph filter in the spectral domain is h(lambda) = (1 + lambda)^{-1}. To avoid the inversion, we approximate h(lambda) by its first-order Taylor expansion, namely h~(lambda) = 1 - lambda, and then generalize it to h~^k(lambda) = (1 - lambda)^k, where k controls the power of the graph filter. Transforming h~^k back to the spatial graph domain yields h~^k(L) = (D^{-1/2} A D^{-1/2})^k, where A is the adjacency matrix and D is the degree matrix. It can be proved that adding a proper self-loop to every node in the graph enables h~^k(L) to more effectively filter out high-frequency noise components (more details are available in Appendix G). Thus, we modify the adjacency matrix as A~ = A + sigma I, where sigma is a small value ensuring every node has its own self-loop. Finally, the low-pass graph filter is utilized to smooth the mapped embedding matrix, E~_i = h~^k(L~_i) E_i, and we iteratively apply this refinement to obtain the embeddings of the original graph (i.e., E_0). Note that our refinement stage does not involve training and can simply be considered a few (sparse) matrix multiplications, which can be computed efficiently; a minimal sketch follows.
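A minimal sketch of the refinement filter is below (our own wrapper; sigma = 2.0 follows the choice reported in Appendix G):

    import numpy as np
    import scipy.sparse as sp

    def refine_embeddings(A, E_mapped, k=2, sigma=2.0):
        # Apply the low-pass filter h~^k(L~) = (D~^{-1/2} A~ D~^{-1/2})^k to the
        # mapped embeddings, with self-loops A~ = A + sigma * I.
        N = A.shape[0]
        A_t = A + sigma * sp.eye(N)
        d = np.asarray(A_t.sum(1)).ravel()
        D_inv_sqrt = sp.diags(1.0 / np.sqrt(d))
        S = D_inv_sqrt @ A_t @ D_inv_sqrt        # normalized adjacency with self-loops
        E = E_mapped
        for _ in range(k):                       # k sparse matrix products
            E = S @ E
        return E

    # Coarse-to-fine mapping: E_i = H.T @ E_{i+1}, where H is the 0/1 mapping
    # operator from level i to i+1, followed by refine_embeddings at level i.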
We have performed a comparative evaluation of the GraphZoom framework against several existing state-of-the-art unsupervised graph embedding techniques and multi-level embedding frameworks on five standard graph datasets (transductive as well as inductive). In addition, we evaluate the scalability of GraphZoom on the Friendster dataset, which contains 8 million nodes and 400 million edges. Finally, we analyze the GraphZoom kernels separately to show their effectiveness. Datasets. The statistics of the datasets used in our experiments are shown in Table 1. We use Cora, Citeseer, Pubmed, and Friendster for the transductive task and PPI and Reddit for the inductive task, choosing the same training and testing sizes used in prior work. Transductive baseline models. Many existing graph embedding techniques are essentially transductive learning methods, which require all nodes in the graph to be present during training; their embedding models have to be retrained whenever a new node is added. We compare GraphZoom with the transductive models DeepWalk, node2vec, and Deep Graph Infomax (DGI), as well as with HARP and MILE, which have shown improvement upon DeepWalk and node2vec in either embedding quality or scalability. Inductive baseline models. Inductive graph embedding models can be trained without seeing the whole graph structure, and their trained models can be applied to new nodes added to the graph. To show that GraphZoom can also enhance inductive learning, we compare it against GraphSAGE using four different aggregation functions. More details of the datasets and baselines are available in Appendices A and B. We optimize the hyperparameters of DeepWalk, node2vec, DGI, and GraphSAGE on the original datasets as embedding baselines, and then choose the same hyperparameters to embed the coarsened graphs in the HARP, MILE, and GraphZoom frameworks. We run all the experiments on a machine running Linux with an Intel Xeon Gold 6242 CPU (32 cores, 2.40GHz) and 384 GB of RAM. Results for the transductive and inductive tasks are shown in Tables 2 and 3, respectively. We report the mean classification accuracy for the transductive task and the micro-averaged F1 score for the inductive task, as well as CPU time, over 10 runs for all baselines and GraphZoom. We measure the CPU time for graph embedding as the total run time of DeepWalk, node2vec, DGI, and GraphSAGE. We use the sum of the CPU time for graph coarsening, graph embedding, and embedding refinement as the total run time of HARP and MILE; similarly, we sum the CPU time for graph fusion, graph coarsening, graph embedding, and embedding refinement as the total run time of GraphZoom. We also perform fine-tuning of the hyperparameters. For both DeepWalk and node2vec, we use 10 walks with a walk length of 80, a window size of 10, and an embedding dimension of 128; we further set p = 1 and q = 0.5 in node2vec. For DGI, we choose an early-stopping strategy with a learning rate of 0.001 and an embedding dimension of 512. For GraphSAGE, we train a two-layer model for one epoch, with a learning rate of 0.00001, an embedding dimension of 128, and a batch size of 256. Comparing GraphZoom with baseline embedding methods. We show the results of GraphZoom with the coarsening level varying from 1 to 3 for transductive learning and from 1 to 2 for inductive learning. Results with larger coarsening levels are available in Figure 3 (blue curve) and in Appendix I. Our results demonstrate that GraphZoom is agnostic to the underlying embedding method and capable of boosting the accuracy and speed of state-of-the-art unsupervised embedding methods on various datasets. More specifically, for the transductive learning task, GraphZoom improves classification accuracy upon both DeepWalk and node2vec by margins of 8.3%, 19.4%, and 10.4% on Cora, Citeseer, and Pubmed, respectively, while achieving up to a 40.8x run-time reduction. Compared with DGI, GraphZoom achieves comparable or better accuracy with speedups up to 11.2x. Similarly, GraphZoom outperforms all the baselines by margins of 3.4% and 3.3% on PPI and Reddit for the inductive learning task, respectively, with speedups up to 7.6x. Our results indicate that reducing the graph size while properly retaining the key spectral properties of the graph Laplacian and smoothing the embeddings not only boosts the embedding speed but also leads to high embedding quality. Comparing GraphZoom with multi-level frameworks. As shown in Table 2, HARP only slightly improves and sometimes even worsens the classification accuracy while significantly increasing the CPU time. Although MILE improves both accuracy and CPU time compared to baseline embedding methods in some cases, the performance of MILE becomes worse with increasing coarsening levels (e.g., the classification accuracy of MILE drops from 0.708 to 0.618 on the Pubmed dataset with node2vec as the embedding kernel). GraphZoom achieves better accuracy and speedup compared to MILE with the same coarsening level across all datasets. Moreover, when increasing the coarsening level, namely decreasing the number of nodes in the coarsened graph, GraphZoom still produces comparable or even better embedding accuracy with much shorter CPU times. This further confirms that GraphZoom can retain the key graph structure information, which is utilized by the underlying embedding models to generate high-quality node embeddings. More results of GraphZoom on non-attributed graphs for both node classification and link prediction tasks are available in Appendix J. GraphZoom for large graph embedding. To show that GraphZoom can significantly improve the performance and scalability of an underlying embedding model on a large graph, we test GraphZoom and MILE on the Friendster dataset, which contains 8 million nodes and 400 million edges, using DeepWalk as the embedding kernel.
As shown in Figure 2, GraphZoom drastically boosts the Micro-F1 score, by up to 47.6% compared to MILE and 49.9% compared to DeepWalk, with speedups up to 119.8x. When increasing the coarsening level, GraphZoom achieves a higher speedup while the embedding accuracy decreases gracefully, which shows the key strength of GraphZoom: it can effectively coarsen a large graph by merging many redundant nodes that are spectrally similar, preserving the graph spectral (structural) properties that matter most to the underlying embedding model. When a basic embedding model is applied to the coarsest graph, it can learn more global information from the spectral domain, leading to high-quality node embeddings. On the contrary, the heuristic graph coarsening algorithm used in MILE fails to preserve a meaningful coarsest graph, especially when coarsening the graph by a large reduction ratio. Figure 3: Comparison of different kernel combinations in GraphZoom and MILE in classification accuracy on the Cora, Citeseer, and Pubmed datasets. We choose DeepWalk (DW) as the embedding kernel. GZoom F, GZoom C, and GZoom R denote the fusion, coarsening, and refinement kernels proposed in GraphZoom, respectively; MILE C and MILE R denote the coarsening and refinement kernels in MILE, respectively. The blue curve is essentially GraphZoom and the yellow one is MILE. To study the effectiveness of our proposed GraphZoom kernels separately, we compare each of them against the corresponding kernel in MILE with the other kernels fixed. As shown in Figure 3, when fixing the coarsening kernel and comparing the refinement kernel of GraphZoom with that of MILE (purple curve and yellow curve), the GraphZoom refinement kernel improves the embedding quality upon the MILE refinement kernel, especially when the coarsening level is large, which indicates that our proposed graph filter in the refinement kernel can successfully filter out high frequency noise to improve embedding quality. Similarly, when comparing the coarsening kernels of GraphZoom and MILE with the refinement kernel fixed (light blue curve and yellow curve), the GraphZoom coarsening kernel also improves the embedding quality upon the MILE coarsening kernel, which shows that our spectral graph coarsening algorithm can indeed retain the key graph structure for underlying graph embedding models to exploit. When combining the GraphZoom coarsening and refinement kernels (green curve), we achieve better classification accuracy than with any kernel from MILE (i.e., the light blue, purple, and yellow curves), which means that the GraphZoom coarsening and refinement kernels play different roles in boosting embedding performance and their combination further improves the embedding results. Moreover, when adding the graph fusion kernel to the combination of the GraphZoom coarsening and refinement kernels (blue curve, which is our full GraphZoom framework), classification accuracy improves by a large margin, which indicates that graph fusion properly incorporates both graph topology and node attribute information and lifts the embedding quality of downstream embedding models. Results on the CPU time and speedup of each kernel are available in Appendix F and Appendix H. In this work we propose GraphZoom, a multi-level framework to improve the embedding quality and scalability of underlying unsupervised graph embedding techniques. GraphZoom encodes the graph structure and node attribute information in a single fused graph and exploits spectral coarsening and refinement methods to remove high frequency noise from the graph.
Experiments show that GraphZoom improves both classification accuracy and embedding speed on a number of popular datasets. An interesting direction for future work is to derive a proper way to propagate node labels to the coarsest graph, which would allow GraphZoom to support supervised graph embedding models. Transductive task. We follow the standard experimental setup for three citation network benchmark datasets: Cora, Citeseer, and Pubmed. In all three citation networks, nodes represent documents and edges correspond to citations. Each node has a sparse bag-of-words feature vector and a class label. We allow only 20 labels per class for training and 1,000 labeled nodes for testing. In addition, we further evaluate on the Friendster dataset, which contains 8 million nodes and 400 million edges, with 2.5% of the nodes used for training and 0.3% of the nodes for testing. In Friendster, nodes represent users and a pair of nodes are linked if they are friends; each node has a class label but is not associated with a feature vector. Inductive task. We follow the standard setup for experiments on both the protein-protein interaction (PPI) and Reddit datasets. The PPI dataset consists of graphs corresponding to human tissues, where nodes are proteins and edges represent interaction effects between proteins. The Reddit dataset contains nodes corresponding to users' posts: two nodes are connected through an edge if the same user comments on both posts. We use 60% of the nodes for training and 40% for testing on PPI, and 65% for training and 35% for testing on Reddit. DeepWalk first generates random walks based on the graph structure; the walks are then treated as sentences in a language model, and the Skip-Gram model is exploited to obtain node embeddings. node2vec differs from DeepWalk in how it generates random walks, introducing the return parameter p and the in-out parameter q, which combine DFS-like and BFS-like neighborhood exploration. Deep Graph Infomax (DGI) is an unsupervised approach that generates node embeddings by maximizing the mutual information between patch representations (local information) and corresponding high-level summaries (global information) of graphs. GraphSAGE embeds nodes in an inductive way by learning an aggregation function that aggregates node features to obtain embeddings; it supports four different aggregation functions: GraphSAGE-GCN, GraphSAGE-mean, GraphSAGE-LSTM, and GraphSAGE-pool. HARP coarsens the original graph into several levels and applies the underlying embedding model to train the coarsened graph at each level sequentially to obtain the final embeddings on the original graph. Since the coarsening level is fixed in their implementation, we run HARP in our experiments without changing the coarsening level. MILE is the state-of-the-art multi-level unsupervised graph embedding framework and is similar to our GraphZoom framework, since it also contains graph coarsening and embedding refinement kernels. More specifically, MILE first uses its heuristic-based coarsening kernel to reduce the graph size and trains the underlying unsupervised graph embedding model on the coarsest graph; its refinement kernel then employs a Graph Convolutional Network (GCN) to refine the embeddings on the original graph. We compare GraphZoom with MILE on various datasets, including Friendster, which contains 8 million nodes and 400 million edges (shown in Table 2 and Figure 2). Moreover, we further compare each kernel of GraphZoom and MILE in Figure 3.
The details of the graph size at each coarsening level on all six datasets are shown in Table 4. APPENDIX E SPECTRAL COARSENING Note that the mapping operator H^{i+1}_i in {0, 1}^{|V_{i+1}| x |V_i|} is a matrix containing only 0s and 1s. It has the following properties: • The row (column) index of H^{i+1}_i corresponds to the node index in graph G_{i+1} (G_i). • It is a surjective mapping of the node set, where (H^{i+1}_i)_{p,q} = 1 if node q in graph G_i is aggregated to super-node p in graph G_{i+1}, and (H^{i+1}_i)_{p',q} = 0 for all other nodes p' in V_{i+1} with p' distinct from p. • It is a locality-preserving operator, where the coarsened version of G_i induced by the nonzero entries of (H^{i+1}_i)_{p,:} is connected for each p in V_{i+1}. The spectral coarsening algorithm (Algorithm 2) takes the adjacency matrix A_i in R^{|V_i| x |V_i|} as input and outputs the adjacency matrix A_{i+1} in R^{|V_{i+1}| x |V_{i+1}|} of the reduced graph G_{i+1} together with the mapping operator H^{i+1}_i in R^{|V_{i+1}| x |V_i|}. It initializes n = |V_i| and n_c = n, sets the graph reduction ratio parameters gamma_max = 1.8 and delta = 0.9, computes the spectral node affinity a_{p,q} defined in Eq. 2 for each edge (p, q) in E_i, and then sweeps over the nodes of V_i, aggregating neighboring nodes with high spectral affinity until the target reduction ratio is reached. As shown in Figure 4 (note that the y-axis is on a logarithmic scale), the GraphZoom embedding kernel dominates the total CPU time, which can be more effectively reduced with a greater coarsening level L. All other kernels in GraphZoom are very efficient, which enables the GraphZoom framework to drastically reduce the total graph embedding time. Figure 5a shows the original distribution of the graph Laplacian eigenvalues, which can also be interpreted as frequencies in the graph spectral domain (a smaller eigenvalue means a lower frequency). The proposed graph filter for embedding refinement (shown in Figure 5e) can be considered a band-stop filter that passes all frequencies with the exception of those within the middle stop band, which are greatly attenuated. Therefore, the band-stop filter may not be very effective for removing high-frequency noise from the graph signals. Fortunately, it has been shown that by adding self-loops to each node in the graph, A~ = A + sigma I (shown in Figures 5b, 5c, and 5d, for sigma = 0.5, 1.0, 2.0), the distribution of Laplacian eigenvalues can be squeezed to the left (towards zero). By properly choosing sigma such that large eigenvalues mostly lie in the stop band (e.g., sigma = 1.0 or 2.0, shown in Figures 5c and 5d), the graph filter will be able to effectively filter out high-frequency components (corresponding to high eigenvalues) while retaining low-frequency components, which makes it similar to the low-pass graph filter shown in Figure 5f. It is worth noting that if sigma is too large, most eigenvalues will be very close to zero, which makes the graph filter less effective for removing noise. In this work, we choose sigma = 2.0 for all our experiments. As shown in Figure 6, the combination of the GraphZoom coarsening and refinement kernels always achieves the greatest speedups (green curves); adding the GraphZoom fusion kernel (blue curves) lowers the speedups by a small margin but further boosts the embedding quality, showing a clear trade-off between embedding quality and runtime efficiency: to achieve the highest graph embedding quality, the graph fusion kernel should be included. To further show that GraphZoom can work on non-attributed datasets, we evaluate it on the PPI (Homo Sapiens) and Wiki datasets, following the same dataset configuration used in prior work. As shown in Table 5, GraphZoom (without the fusion kernel) improves the embedding results on both the node classification and link prediction tasks.
A multi-level spectral approach to improving the quality and scalability of unsupervised graph embedding.
955
scitldr
Deep generative models have been enjoying success in modeling continuous data. However, it remains challenging to capture the representations of discrete structures with formal grammars and semantics, e.g., computer programs and molecular structures. How to generate both syntactically and semantically correct data still remains largely an open problem. Inspired by the theory of compilers, where syntax and semantics checks are done via syntax-directed translation (SDT), we propose a novel syntax-directed variational autoencoder (SD-VAE) by introducing stochastic lazy attributes. This approach converts the offline SDT check into on-the-fly generated guidance for constraining the decoder. Compared to state-of-the-art methods, our approach enforces constraints on the output space so that the output is not only syntactically valid but also semantically reasonable. We evaluate the proposed model with applications in programming languages and molecules, including reconstruction and program/molecule optimization. The results demonstrate the effectiveness of incorporating syntactic and semantic constraints in discrete generative models, which is significantly better than current state-of-the-art approaches. Recent advances in deep representation learning have resulted in powerful probabilistic generative models which have demonstrated their ability to model continuous data, e.g., time series signals BID20 BID6 and images BID22 BID15. Despite the success in these domains, it is still challenging to correctly generate discrete structured data, such as graphs, molecules, and computer programs. Since many of these structures have syntactic and semantic formalisms, generative models without explicit constraints often produce invalid ones. Conceptually, an approach to generative modeling of structured data can be divided into two parts: one is the formalization of the structure generation, and the other is a (usually deep) generative model producing parameters for the stochastic process in that formalization. Often the hope is that, with the help of training samples and the capacity of deep models, the loss function will prefer the valid patterns and automatically encourage the mass of the generative model's distribution towards the desired region. Arguably the simplest structured data are sequences, whose generation with deep models has been well studied under the seq2seq BID25 framework, which models the generation of a sequence as a series of token choices parameterized by recurrent neural networks (RNNs). Its widespread success has encouraged several pioneering works that consider the conversion of more complex structured data into sequences and apply sequence models to the resulting representations. BID11 (CVAE) is a representative work of this paradigm for chemical molecule generation, using the SMILES line notation BID26 for representing molecules. Figure 1: The illustration on the left shows the hierarchy of the structured-data decoding space w.r.t. different works, and the theoretical classification of the corresponding strings from formal language theory. SD-VAE, our proposed model with attribute grammar, reshapes the output space more tightly around the meaningful target space than existing works. On the right we show a case where a CFG is unable to capture the semantic constraints, since it successfully parses an invalid program.
However, because of the lack of a formalization of syntax and semantics serving as a restriction on the particular structured data, underfitted general-purpose string generative models often produce invalid outputs. Therefore, to obtain a reasonable model via such a training procedure, one needs to prepare a large number of valid combinations of the structures, which is time-consuming or even impractical in domains like drug discovery. To tackle this challenge, one approach is to incorporate the structure restrictions explicitly into the generative model. For considerations of computational cost and model generality, context-free grammars (CFG) have been taken into account in the decoder parametrization. For instance, in molecule generation tasks, BID18 proposes a grammar variational autoencoder (GVAE) in which the CFG of the SMILES notation is incorporated into the decoder. The model generates the parse trees directly in a top-down direction, by repeatedly expanding any nonterminal with its production rules. Although the CFG provides a mechanism for generating syntactically valid objects, it is still incapable of regularizing the model to generate semantically valid objects BID18. For example, in molecule generation, the semantics of the SMILES language require that generated rings be closed; in program generation, a referenced variable should be defined in advance, and each variable can only be defined exactly once in each local context (illustrated in Fig 1b). All these examples require cross-serial dependencies, which are not enforceable by a CFG, implying that constraints beyond the CFG are needed to achieve semantically valid production in a VAE. In the theory of compilers, attribute grammars, or syntax-directed definitions, have been proposed for attaching semantics to a parse tree generated by a context-free grammar. Thus one straightforward but impractical application of attribute grammars is to conduct offline semantic checking after generating a syntactically valid molecule candidate. This process needs to be repeated until a semantically valid candidate is discovered, which is at best computationally inefficient and at worst infeasible, due to the extremely low rate of passing the check. As a remedy, we propose the syntax-directed variational autoencoder (SD-VAE), in which a semantic restriction component is advanced to the stage of the syntax tree generator. This provides the generator with both syntactic and semantic validation. The proposed syntax-directed generative mechanism in the decoder further constrains the output space to ensure semantic correctness in the tree generation process. The relationships between our proposed model and previous models are characterized in Figure 1a. Our method brings the theory of formal languages into stochastic generative models. The contributions of our paper can be summarized as follows: • Syntax and semantics enforcement: We propose a new formalization of semantics that systematically converts the offline semantic check into online guidance for stochastic generation using the proposed stochastic lazy attributes. This allows us to effectively address both syntax and semantic constraints. • Efficient learning and inference: Our approach has computational cost O(n), where n is the length of the structured data. This is the same as existing methods like CVAE and GVAE, which do not enforce semantics in generation.
During inference, the SD-VAE runs with semantic guidance on-the-fly, while the existing alternatives generate many candidates for semantic checking. • Strong empirical performance: We demonstrate the effectiveness of the SD-VAE through applications in two domains, namely a subset of Python programs and molecules. Our approach consistently and significantly improves the results in evaluations including generation, reconstruction, and optimization. Before introducing our model and the learning algorithm, we first provide some background which is important for understanding the proposed method. The variational autoencoder BID16 BID23 provides a framework for learning a probabilistic generative model as well as its posterior, respectively known as the decoder and the encoder. We denote the observation as x, which is the structured data in our case, and the latent variable as z. The decoder models the probabilistic generative process of x given the continuous representation z through the likelihood p_theta(x|z) and the prior over the latent variables p(z), where theta denotes the parameters. The encoder approximates the posterior p_theta(z|x), proportional to p_theta(x|z)p(z), with a model q_psi(z|x) parametrized by psi. The decoder and encoder are learned simultaneously by maximizing the evidence lower bound (ELBO) of the marginal likelihood, i.e.,
L(theta, psi; X) = sum over x in X of ( E over q_psi(z|x) of [log p_theta(x|z)] - KL(q_psi(z|x) || p(z)) ),
where X denotes the training dataset containing the observations. Context-free grammar. A context-free grammar (CFG) is defined as G = (V, Sigma, R, s), where symbols are divided into V, the set of non-terminal symbols, Sigma, the set of terminal symbols, and s in V, the start symbol. Here R is the set of production rules. Each production rule r in R is denoted as r = (alpha -> beta), where alpha in V is a nonterminal symbol and beta = u_1 u_2 . . . u_{|beta|} in (V union Sigma)* is a sequence of terminal and/or nonterminal symbols. Attribute grammar. To enrich the CFG with "semantic meaning", BID17 formalizes the attribute grammar, which attaches attributes and semantic rules to the CFG. An attribute is an attachment to the corresponding nonterminal symbol in the CFG, written in the format v.a for v in V. There can be two types of attributes assigned to non-terminals in G: inherited attributes and synthesized attributes. An inherited attribute depends on the attributes from its parent and siblings, while a synthesized attribute is computed based on the attributes of its children. Formally, for a production u_0 -> u_1 u_2 . . . u_{|beta|}, we denote by I(u_i) and S(u_i) the sets of inherited and synthesized attributes of u_i for i in {0, . . ., |beta|}, respectively. We here exemplify how the above-defined attribute grammar enriches the CFG with non-context-free semantics. We use the following toy grammar, a subset of SMILES that generates either a chain or a cycle with three carbons:
s -> atom_1 'C' atom_2        s.matched <- atom_1.set intersect atom_2.set, s.ok <- (atom_1.set = atom_2.set)
atom -> 'C' | 'C' bond digit  atom.set <- empty set | {(bond, digit)}
bond -> '-' | '=' | '#'
digit -> '1' | '2' | . . . | '9'
where we show the production rules of the CFG with -> on the left, and the calculation of attributes in the attribute grammar with <- on the right. Here we leverage the attribute grammar to check (with the attribute matched) whether the ringbonds come in pairs: a ringbond generated at atom 1 should match the bond type and bond index of the one generated at atom 2; moreover, the semantic constraint expressed by s.ok requires that there be no difference between the set attributes of atom 1 and atom 2. Such constraints in SMILES are known as cross-serial dependencies (CSD) BID4, which are non-context-free BID24. See Appendix A.3 for more explanations; a small sketch of the corresponding bottom-up check appears below.
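The following is a minimal sketch (our own illustration, not the paper's code) of the offline semantic check: the synthesized set attributes are computed bottom-up and the root constraint s.ok verifies that the ringbonds opened at atom 1 and atom 2 match.

    class Node:
        def __init__(self, symbol, children=()):
            self.symbol = symbol
            self.children = list(children)
            self.attr = {}

    def synthesize(node):
        # Post-order traversal: children first, then this node's attributes.
        for c in node.children:
            synthesize(c)
        if node.symbol == "atom":
            # atom.set collects (bond_type, ring_index) pairs of opened ringbonds.
            node.attr["set"] = {
                (b.attr["val"], d.attr["val"])
                for b, d in zip(node.children, node.children[1:])
                if b.symbol == "bond" and d.symbol == "digit"
            }
        elif node.symbol == "s":
            a1, a2 = [c for c in node.children if c.symbol == "atom"]
            node.attr["matched"] = a1.attr["set"] & a2.attr["set"]
            node.attr["ok"] = a1.attr["set"] == a2.attr["set"]

    # Example: 'C1CC1'; both atoms open ringbond ('-', '1'), so s.ok is True.
    b1, d1, b2, d2 = Node("bond"), Node("digit"), Node("bond"), Node("digit")
    b1.attr["val"] = b2.attr["val"] = "-"
    d1.attr["val"] = d2.attr["val"] = "1"
    tree = Node("s", [Node("atom", [b1, d1]), Node("atom", [b2, d2])])
    synthesize(tree)
    assert tree.attr["ok"]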
Here all the attributes are synthetic, i.e., calculated in a bottom-up direction. So generally, in the semantic correctness checking procedure, one need to perform bottom-up procedures for calculating the attributes after the parse tree is generated. However, in the top-down structure generating process, the parse tree is not ready for semantic checking, since the synthesized attributes of each node require information from its children nodes, which are not generated yet. Due to such dilemma, it is nontrivial to use the attribute grammar to guide the top-down generation of the tree-structured data. One straightforward way is using acceptance-rejection sampling scheme, i.e., using the decoder of CVAE or GVAE as a proposal and the semantic checking as the threshold. It is obvious that since the decoder does not include semantic guidance, the proposal distribution may raise semantically invalid candidate frequently, therefore, wasting the computational cost in vain.3 SYNTAX-DIRECTED VARIATIONAL AUTOENCODER As described in Section 2.2.1, directly using attribute grammar in an offline fashion (i.e., after the generation process finishes) is not efficient to address both syntax and semantics constraints. In this section we describe how to bring forward the attribute grammar online and incorporate it into VAE, such that our VAE addresses both syntactic and semantic constraints. We name our proposed method Syntax-Directed Variational Autoencoder (SD-VAE). By scrutinizing the tree generation, the major difficulty in incorporating the attributes grammar into the processes is the appearance of the synthesized attributes. For instance, when expanding the start symbol s, none of its children is generated yet. Thus their attributes are also absent at this time, making the s.matched unable to be computed. To enable the on-the-fly computation of the synthesized attributes for semantic validation during tree generation, besides the two types of attributes, we introduce the stochastic lazy attributes to enlarge the existing attribute grammar. Such stochasticity transforms the corresponding synthesized attribute into inherited constraints in generative procedure; and lazy linking mechanism sets the actual value of the attribute, once all the other dependent attributes are ready. We demonstrate how the decoder with stochastic lazy attributes will generate semantic valid output through the same pedagogical example as in Section 2.2.1. FIG3 visually demonstrates this process. The tree generation procedure is indeed sampling from the decoder p θ (x|z), which can be decomposed into several steps that elaborated below: Sample production rule r = (α → β) ∈ R ∼ p θ (r|ctx, node, T). The conditioned variables encodes the semantic constraints in tree generation. DISPLAYFORM0 DISPLAYFORM1 ctx ← RNN(ctx, r) update context vector 6: DISPLAYFORM0 ) node creation with parent and siblings' attributes 8:GenTree(v i, T) recursive generation of children nodes 9:Update synthetic and stochastic attributes of node with v i Lazy linking 10:end for 11: end procedure i) stochastic predetermination: in FIG3 (a), we start from the node s with the synthesized attributes s.matched determining the index and bond type of the ringbond that will be matched at node s. Since we know nothing about the children nodes right now, the only thing we can do is to'guess' a value. 
Formally, we associate a stochastic attribute s.sa ∈ {0, 1}^{C_a} ∼ ∏_{i=1}^{C_a} B(sa_i | z) as a predetermination that stands in for the absent synthesized attribute s.matched, where B(·) is the Bernoulli distribution. Here C_a is the maximum possible cardinality¹ of the corresponding attribute a. In the above example, 0 indicates no ringbond and 1 indicates one ringbond at both atom 1 and atom 2, respectively.
ii) constraints as inherited attributes: we pass s.sa down as an inherited constraint to the children of node s, i.e., atom 1 and atom 2, to ensure semantic validity during tree generation. For example, in FIG3(b) 'sa=1' is passed down to atom 1.
iii) sampling under constraints: without loss of generality, we assume atom 1 is generated before atom 2. We then sample rules from p_θ(r | atom 1, s, z) to expand atom 1, and so on and so forth to generate the subtree recursively. Since the sampling distribution is carefully designed to condition on the stochastic attribute, the inherited constraints will eventually be satisfied. In the example, because s.sa = '1', when expanding atom 1 the sampling distribution p_θ(r | atom 1, s, z) only has positive mass on the rule atom → 'C' bond digit.
iv) lazy linking: once we complete the generation of the subtree rooted at atom 1, the synthesized attribute atom 1.set becomes available. According to the semantic rule for s.matched, we can instantiate s.matched = atom 1.set = {'-1'}. This linking is shown in FIG3(d)(e). When expanding atom 2, s.matched is passed down as an inherited attribute to regulate the generation of atom 2, as demonstrated in FIG3(f)(g).
In summary, a general syntax tree T ∈ L(G) can be constructed step by step, within the language L(G) covered by the grammar G. In the beginning, T^(0) = root, where root.symbol = s, i.e., the tree contains only the start symbol s. At step t, we choose a nonterminal node in the frontier² of the partially generated tree T^(t) to expand. The generative process at each step t = 0, 1, ... can be described as:
1. Pick a node v^(t) ∈ Fr(T^(t)) whose required attributes are either already satisfied, or are stochastic attributes that are first sampled according to the Bernoulli distribution B(·|v^(t), T^(t));
2. Sample a rule r^(t) = (α^(t) → β^(t)) ∈ R ∼ p_θ(r | ctx^(t), v^(t), T^(t)), i.e., expand the nonterminal with the production rules defined in the CFG;
3. Set T^(t+1) ← T^(t) with β^(t) attached to v^(t), growing the tree; node v^(t) now has children represented by the symbols in β^(t).
The above process continues until, after T steps, all nodes in the frontier of T^(T) are terminals. This yields Algorithm 1 for sampling structures that are both syntactically and semantically valid. In fact, in the model training phase we need to compute the likelihood p_θ(x|z) given x and z. The probability computation procedure is similar to the sampling procedure in the sense that both require tree generation; the only difference is that in the likelihood computation the tree structure, i.e., the computing path, is fixed since x is given, while in the sampling procedure it is sampled from the learned model. Specifically, the generative likelihood can be written as p_θ(x|z) = ∏_{t=1}^{T} p_θ(r_t | ctx^(t−1)), where ctx^(0) = z and ctx^(t) = RNN(r_t, ctx^(t−1)). Here the RNN can be a commonly used LSTM, etc. A PyTorch-style sketch of this computation follows.
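Below is a minimal sketch of the autoregressive likelihood above, under the assumption that the syntax and semantic constraints have been precomputed as 0/1 masks over the rule vocabulary; the module name, the GRU cell choice, and all dimensions are our illustrative placeholders.

```python
import torch
import torch.nn as nn

class RuleDecoder(nn.Module):
    def __init__(self, n_rules, latent_dim=56, hidden=256):
        super().__init__()
        self.init = nn.Linear(latent_dim, hidden)  # ctx^(0) derived from z
        self.rnn = nn.GRUCell(n_rules, hidden)     # ctx^(t) = RNN(r_t, ctx^(t-1))
        self.out = nn.Linear(hidden, n_rules)

    def log_likelihood(self, z, rules_onehot, masks):
        """z: (1, latent_dim); rules_onehot, masks: sequences of (n_rules,) tensors."""
        ctx, ll = torch.tanh(self.init(z)), z.new_zeros(())
        for r_t, m_t in zip(rules_onehot, masks):
            # invalid rules (syntax or semantics) receive zero probability
            logits = self.out(ctx).masked_fill(m_t == 0, float("-inf"))
            logp = torch.log_softmax(logits, dim=-1)
            ll = ll + logp[0, r_t.argmax()]        # log-prob of the observed rule
            ctx = self.rnn(r_t.unsqueeze(0), ctx)  # update the context vector
        return ll
```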
As introduced in Section 2, the encoder q_ψ(z|x) approximates the posterior of the latent variable with a parametrized function with parameters ψ. Since the structure of the observation x plays an important role, the encoder parametrization should take it into account. Recently developed deep learning models BID9 BID19 provide powerful candidate encoders. However, to demonstrate the benefits of the proposed syntax-directed decoder in incorporating the attribute grammar for semantic restrictions, we use the same encoder as BID18 for a fair comparison. We provide a brief introduction to that encoder model to keep the paper self-contained. Given a program or a SMILES sequence, we obtain the corresponding parse tree using the CFG and decompose it into a sequence of productions through a pre-order traversal of the tree. Then we convert these productions into one-hot indicator vectors, in which each dimension corresponds to one production in the grammar. The encoder is a deep convolutional neural network that maps this sequence of one-hot vectors to a continuous vector.
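For reference, a hedged sketch of such an encoder is given below; the filter counts, kernel sizes, and dense width are our assumptions for illustration rather than the exact hyperparameters of the prior work.

```python
import torch
import torch.nn as nn

class CNNEncoder(nn.Module):
    def __init__(self, n_rules, max_len, latent_dim=56):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv1d(n_rules, 9, kernel_size=9), nn.ReLU(),
            nn.Conv1d(9, 9, kernel_size=9), nn.ReLU(),
            nn.Conv1d(9, 10, kernel_size=11), nn.ReLU(),
        )
        conv_out = 10 * (max_len - 26)       # length shrinks by 8 + 8 + 10
        self.fc = nn.Linear(conv_out, 435)
        self.mu = nn.Linear(435, latent_dim)
        self.logvar = nn.Linear(435, latent_dim)

    def forward(self, x_onehot):             # x_onehot: (B, T, n_rules)
        h = self.convs(x_onehot.transpose(1, 2)).flatten(1)
        h = torch.relu(self.fc(h))
        return self.mu(h), self.logvar(h)    # Gaussian posterior parameters
```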
Our learning goal is to maximize the evidence lower bound in Eq. 1. Given the encoder, we map the structured input into the latent space z. The variational posterior q(z|x) is parameterized as a Gaussian distribution whose mean and variance are the outputs of the corresponding neural networks. The prior of the latent variable is p(z) = N(0, I). Since both the prior and posterior are Gaussian, we use the closed form of the KL-divergence proposed in BID16. In the decoding stage, our goal is to maximize p_θ(x|z); using the factorized likelihood above, we can compute the corresponding conditional likelihood. During training, the syntax and semantics constraints required in Algorithm 1 can be precomputed. In practice, we observe no significant time penalty measured in wall-clock time compared to previous works.

Generative models of discrete structured data have raised increasing interest among researchers in different domains. The classical sequence-to-sequence model BID25 and its variations have also been applied to molecules BID11. Since the model is quite flexible, it is hard to generate valid structures with limited data, though Janz et al. show that an extra validator model can help to some degree. Techniques including data augmentation BID2, active learning BID14, and reinforcement learning have also been proposed to tackle this issue. However, according to the empirical evaluations in BID1, the validity is still not satisfactory. Even when validity is enforced, the models tend to overfit to simple structures while neglecting diversity. Since structured data often comes with a formal grammar, it is very helpful to generate its parse tree derived from the CFG instead of generating a sequence of tokens directly. The Grammar VAE BID18 introduced a CFG-constrained decoder for simple math expression and SMILES string generation. The rules are used to mask out invalid syntax, so that the generated sequence is always from the language defined by its CFG. BID21 uses a Recursive-Reverse-Recursive Neural Network (R3NN) to capture global context information while expanding with CFG production rules. Although these works follow the syntax via the CFG, context-sensitive information can only be captured using variants of sequence/tree RNNs (; BID8 BID27), which may not be time- and sample-efficient. In our work, we capture the semantics with the proposed stochastic lazy attributes when generating structured outputs. By addressing the most common semantics to harness the deep networks, our approach can greatly reshape the output domain of the decoder BID13. As a result, we also obtain a better generative model for discrete structures. Code is available at https://github.com/Hanjun-Dai/sdvae.

We show the effectiveness of our proposed SD-VAE with applications in two domains, namely programs and molecules. We compare our method with CVAE BID11 and GVAE BID18. CVAE only uses character-sequence information, while GVAE utilizes the context-free grammar. To make a fair comparison, we closely follow the experimental protocols set up in BID18. The training details are included in Appendix B. Our method gets significantly better results than previous works. It yields better reconstruction accuracy and prior validity by large margins, while also having comparable diversity of generated structures. More importantly, the SD-VAE finds better solutions in program and molecule regression and optimization tasks. This demonstrates that the continuous latent space obtained by SD-VAE is also smoother and more discriminative.

Here we first describe our datasets in detail. The programs are represented as lists of statements. Each statement is an atomic arithmetic operation on variables (labeled v0, v1, ..., v9) and/or immediate numbers (1, 2, ..., 9). Some examples are listed below:
v9=v3-v8;v5=v0*v9;return:v5
v2=exp(v0);v7=v2*v0;v9=cos(v7);v8=cos(v9);return:v8
Here v0 is always the input, and the variable specified by return (respectively v5 and v8 in the examples) is the output; each program therefore represents a univariate function f: R → R. Note that a correct program should, besides the context-free grammar specified in Appendix A.1, also respect semantic constraints. For example, a variable should be defined before being referenced. We randomly generate 130,000 programs, each consisting of 1 to 5 valid statements. Here the maximum number of decoding steps is T = 80. We hold 2,000 programs out for testing and use the rest for training and validation. For the molecule experiments, we use the same dataset as in BID18. It contains 250,000 SMILES strings extracted from the ZINC database BID11. We use the same split as BID18, where 5,000 SMILES strings are held out for testing. Regarding the syntax constraints, we use the grammar specified in Appendix A.2, which is also the same as BID18. Here the maximum number of decoding steps is T = 278. For our SD-VAE, we address some of the most common semantics:
Program semantics: a) variables should be defined before use, b) the program must return a variable, c) the number of statements should be less than 10.
Molecule semantics: the SMILES semantics we address include a) ringbonds should satisfy cross-serial dependencies, and b) the explicit valence of atoms should not go beyond what is permitted. For more details about the semantics of the SMILES language, please refer to Appendix A.3.

We use the held-out dataset to measure the reconstruction accuracy of the VAEs. For prior validity, we first sample latent representations from the prior distribution, and then evaluate how often the model can decode them into a valid structure.
Since both encoding and decoding are stochastic in VAEs, we follow the Monte Carlo method used in BID18 for estimation: a) reconstruction: for each structured datum in the held-out set, we encode it 10 times and decode each encoded latent representation 25 times, and report the portion of decoded structures that are identical to the input; b) validity of prior: we sample 1,000 latent representations z ∼ N(0, I); for each of them we decode 100 times, and calculate the portion of the 100,000 decoded results that correspond to valid program or SMILES sequences.
Program: We show in the left part of Table 1 that our model has a near-perfect reconstruction rate and, most importantly, a perfectly valid prior for decoding programs. This large improvement comes from our model utilizing the full semantics that previous work ignores, which in theory guarantees a perfectly valid prior and in practice enables a high reconstruction success rate. For a fair comparison, we run and tune the baselines on 10% of the training data and report the best results. In the same table we also report the reconstruction success rate grouped by the number of statements; our model maintains a high rate even as the program size grows.
SMILES: Since the settings are exactly the same, we include the CVAE and GVAE results directly from BID18. We show in the right part of Table 1 that our model produces a much higher rate of successful reconstruction and ratio of valid prior. FIG6 in Appendix C.2 also shows some decoded molecules from our method. Note that the results reported here do not take the semantics specific to aromaticity into account; if we instead train the model on the alternative kekulized form of SMILES, the valid portion of the prior goes up to 97.3%.
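The Monte Carlo protocol above can be sketched as follows; `encode`, `decode`, and `is_valid` stand in for model-specific calls and are assumptions, not the paper's API.

```python
import torch

def reconstruction_accuracy(model, held_out, n_enc=10, n_dec=25):
    hits = total = 0
    for x in held_out:
        for _ in range(n_enc):
            z = model.encode(x)                    # stochastic encoding
            for _ in range(n_dec):
                hits += int(model.decode(z) == x)  # exact-match reconstruction
                total += 1
    return hits / total

def prior_validity(model, is_valid, n_z=1000, n_dec=100, latent_dim=56):
    valid = 0
    for _ in range(n_z):
        z = torch.randn(1, latent_dim)             # z ~ N(0, I)
        valid += sum(is_valid(model.decode(z)) for _ in range(n_dec))
    return valid / (n_z * n_dec)
```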
Figure 4: On the left are the best programs found by each method using Bayesian optimization. On the right are the top 3 closest programs found by each method, along with the distance to the ground truth (lower distance is better). Both our SD-VAE and CVAE can find similar curves, but our method aligns better with the ground truth. In contrast, GVAE fails this task, reporting trivial programs representing linear functions.

Finding programs: In this application the models are asked to find the program most similar to a ground-truth program. Here the distance is measured by log(1 + MSE), where the MSE (mean squared error) measures the discrepancy of program outputs, given 1,000 different inputs v0 sampled evenly in [−5, 5]. In Figure 4 we show that our method finds the program closest to the ground truth, compared to CVAE and GVAE.
Molecules: Here we optimize the drug properties of molecules. In this problem, we ask the model to optimize the octanol-water partition coefficient (a.k.a. log P), an important measurement of the drug-likeness of a given molecule. As Gómez-Bombarelli et al. suggest, for drug-likeness assessment log P is penalized by other properties, including the synthetic accessibility score BID10. In Figure 5 we show the top-3 best molecules found by each method, where our method found molecules with better scores than previous works. One can also see that the molecular structures found by SD-VAE are richer than the baselines', which mostly consist of chain structures.

Figure 5: Best top-3 molecules and the corresponding scores found by each method using Bayesian optimization.

                Program                          Molecule
Method     LL               RMSE                LL               RMSE
CVAE      -4.943 ± 0.058   3.757 ± 0.026       -1.812 ± 0.004   1.504 ± 0.006
GVAE      -4.140 ± 0.038   3.378 ± 0.020       -1.739 ± 0.004   1.404 ± 0.006
SD-VAE    -3.754 ± 0.045   3.185 ± 0.025       -1.697 ± 0.015   1.366 ± 0.023
Table 2: Predictive performance using the encoded mean latent vector. Test LL and RMSE are reported (the two column groups correspond to the program and molecule tasks).

VAEs also provide a way to do unsupervised feature representation learning BID11. In this section, we seek to know how well our latent space predicts the properties of programs and molecules. After training the VAEs, we dump the latent vectors of each structured datum and train a sparse Gaussian process on the target value (namely the error for programs and the drug-likeness for molecules) for regression. We test the performance on the held-out test dataset. In Table 2, we report the results in log likelihood (LL) and regression mean squared error (RMSE), which show that our SD-VAE always produces a latent space that is more discriminative than both the CVAE and GVAE baselines. This also shows that, with a properly designed decoder, the quality of the encoder is improved as well via end-to-end training.

Method    MorganFp       MACCS          PairFp         TopologicalFp
GVAE      0.92 ± 0.10    0.83 ± 0.15    0.94 ± 0.10    0.71 ± 0.14
SD-VAE    0.92 ± 0.09    0.83 ± 0.13    0.95 ± 0.08    0.75 ± 0.14
Table 3: Diversity as statistics of pairwise distances measured as 1 − s, where s is one of the similarity metrics, so higher values indicate better diversity. We show mean ± stddev of the 100² pairs among 100 molecules. Note that we only report results for GVAE and our SD-VAE, because CVAE has a very low valid prior and thus completely fails this evaluation protocol.

Inspired by BID1, here we measure the diversity of generated molecules as an assessment of the methods. The intuition is that a good generative model should be able to generate diverse data and avoid mode collapse in the learned space. We conduct this experiment on the SMILES dataset. We first sample 100 points from the prior distribution. For each point, we associate it with a molecule, namely the most frequently occurring valid SMILES among its decodings (we use 50 decoding attempts since decoding is stochastic). We then compute the pairwise similarity under each of several molecular similarity metrics and report the mean and standard deviation in Table 3. We see that both methods avoid the mode collapse problem while producing similar diversity scores. This indicates that although our method has a more restricted decoding space than the baselines, diversity is not sacrificed: we never rule out valid molecules, and a more compact decoding space leads to a much higher probability of obtaining valid molecules. (The diversity statistic is sketched below.)
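A hedged sketch of the diversity statistic, using Morgan fingerprints via RDKit (the function name is ours; the fingerprint radius and bit width are assumptions):

```python
from itertools import combinations
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def diversity(smiles_list):
    """Mean pairwise (1 - Tanimoto similarity) over a sample of molecules."""
    fps = [AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, nBits=2048)
           for s in smiles_list]
    dists = [1.0 - DataStructs.TanimotoSimilarity(a, b) for a, b in combinations(fps, 2)]
    return sum(dists) / len(dists)

print(diversity(["CCO", "c1ccccc1", "CC(=O)O"]))  # higher value = more diverse sample
```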
We also seek to visualize the latent space as an assessment of how well our generative model produces a coherent and smooth space of programs and molecules.
Program: Following Bowman et al., we visualize the latent space of programs by interpolating between two programs. More specifically, given two programs encoded to p_a and p_b respectively in the latent space, we pick 9 evenly spaced interpolation points between them. For each point, we pick the most frequently decoded structure. In TAB3 we compare our results with previous works. Our SD-VAE can pass through points in the latent space that decode into valid programs without error, with visually smoother interpolation than previous works. Meanwhile, CVAE makes both syntactic and semantic errors, and GVAE produces only semantic errors (references to undefined variables), but still in considerable amounts.

[TAB3: three columns of interpolated programs, one each for CVAE, GVAE, and SD-VAE, between the same pair of endpoint programs; only the SD-VAE column consists entirely of valid programs.] Observe that as a model passes through points in its latent space, our proposed SD-VAE enforces both syntactic and semantic constraints while making visually smoother interpolations. In contrast, CVAE makes both kinds of mistakes, GVAE avoids syntactic errors but still produces semantic errors, and both baselines produce subjectively less smooth interpolations.

SMILES: For molecules, we visualize the latent space in 2 dimensions. We first embed a random molecule from the dataset into the latent space. Then we randomly generate two orthogonal unit vectors A. To get latent representations of the neighborhood, we interpolate over the 2-D grid and project back to the latent space with the pseudo-inverse of A. Finally we show the decoded molecules. In FIG5 we present two such grid visualizations. Compared subjectively with the figures in BID18, our visualization is characterized by smooth differences between neighboring molecules and more complicated decoded structures.

In this paper we propose a new method to tackle the challenge of addressing both syntax and semantic constraints in generative models for structured data. The newly proposed stochastic lazy attribute provides a systematic conversion of offline syntax and semantics checks into online guidance for stochastic generation, and empirically shows consistent and significant improvement over previous models, while requiring computational cost similar to previous models. In future work, we would like to explore refinements of the formalization on more theoretical grounds, and investigate applications of this formalization to a more diverse set of data modalities.

Since our proposed SD-VAE differentiates itself from previous works (CVAE, GVAE) in the formalization of syntax and semantics, we use the same deep neural network architecture for a fair comparison.
For the encoder, we use a 3-layer one-dimensional convolutional neural network (CNN) followed by a fully connected layer, whose output is fed into two separate affine layers producing µ and σ respectively, as in the reparameterization trick; for the decoder, we use a 3-layer RNN followed by an affine layer with softmax activation that gives the probability of each production rule. In detail, we use a 56-dimensional latent space and the same layer dimensions as in BID18. As for implementation, we use Kusner et al.'s open-sourced code for the baselines, and implement our model in the PyTorch framework³. On a 10% validation set we tune the following hyperparameters and report test results from the setting with the best validation loss. For a fair comparison, the same tuning is also conducted for the baselines. We use ReconstructionLoss + α · KLDivergence as the loss function for training (sketched at the end of this appendix). A natural setting is α = 1, but BID18 suggested in their open-sourced implementation⁴ that using α = 1/LatentDimension leads to better results. We explore both settings.

Bayesian optimization is used to search for latent vectors with a desired target property. For example, in symbolic program regression we are interested in finding programs that fit the given input-output pairs; in drug discovery we aim to find molecules with maximum drug-likeness. For a fair comparison with the baseline algorithms, we follow the settings used in BID18. Specifically, we first train the variational autoencoder in an unsupervised way. After obtaining the generative model, we encode all the structures into the latent space. These vectors and the corresponding property values (i.e., estimated errors for programs, or drug-likeness for molecules) are then used to train a sparse Gaussian process with 500 inducing points, which is later used for predicting properties in latent space. Next, 5 iterations of batch Bayesian optimization with the expected improvement (EI) heuristic are used to propose new latent vectors; in each iteration, 50 latent vectors are proposed. After each proposal, the newly found programs/molecules are added to the batch for the next iteration. When proposing latent vectors in each iteration, we perform 100 rounds of decoding and pick the most frequently decoded structures. This regularizes the decoding randomness, and also increases the chance that the baseline algorithms propose valid structures.

We visualize some reconstructions of SMILES in FIG6. It can be observed that in most cases the decoder successfully recovers the exact original input; due to the stochasticity of the decoder, there may be some small variations.
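For completeness, a minimal sketch of the tuned objective described above (reconstruction loss plus an α-weighted KL term), using the closed-form Gaussian KL:

```python
import torch

def vae_loss(recon_nll, mu, logvar, alpha=1.0):
    """recon_nll: negative log-likelihood of the reconstruction; alpha: KL weight."""
    # KL( N(mu, sigma^2) || N(0, I) ) in closed form
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_nll + alpha * kl  # alpha = 1 or 1/latent_dim, tuned on validation
```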
A new generative model for discrete structured data. The proposed stochastic lazy attribute converts the offline semantic check into online guidance for stochastic decoding, which effectively addresses the constraints in syntax and semantics, and also achieves superior performance.
956
scitldr
Modeling informal inference in natural language is very challenging. With the recent availability of large annotated data, it has become feasible to train complex models such as neural networks to perform natural language inference (NLI), and such models have achieved state-of-the-art performance. Although relatively large annotated data exist, can machines learn all the knowledge needed to perform NLI from these data? If not, how can NLI models benefit from external knowledge, and how should NLI models be built to leverage it? In this paper, we aim to answer these questions by enriching state-of-the-art neural natural language inference models with external knowledge. We demonstrate that the proposed models with external knowledge further improve the state of the art on the Stanford Natural Language Inference (SNLI) dataset.

Reasoning and inference are central to both human and artificial intelligence. Natural language inference (NLI) is concerned with determining whether a natural-language hypothesis h can be inferred from a natural-language premise p. Modeling inference in human language is very challenging but is a basic problem in true natural language understanding: NLI is regarded as a necessary (if not sufficient) condition for true natural language understanding BID20. Recent years have seen advances in modeling natural language inference. An important contribution is the creation of much larger annotated datasets such as SNLI BID5 and MultiNLI BID37. This makes it feasible to train more complex inference models. Neural network models, which often need relatively large amounts of annotated data to estimate their parameters, have been shown to achieve the state of the art on SNLI and MultiNLI BID5 BID25 BID27 BID30 BID26 BID8 BID1. While these neural networks have been shown to be very effective in estimating the underlying inference functions by leveraging large training data to achieve the best results, they have focused on end-to-end training, where all inference knowledge is assumed to be learnable from the provided training data. In this paper, we relax this assumption by exploring whether external knowledge can further help the best reported models, for which we propose models that leverage external knowledge in the major components of NLI. Consider an example from the SNLI dataset:
• p: An African person standing in a wheat field.
• h: A person standing in a corn field.
If the machine cannot learn useful or sufficient information from the large annotated data to distinguish the relationship between wheat and corn, it is difficult for a model to predict that the premise contradicts the hypothesis. In this paper, we propose neural network-based NLI models that can benefit from external knowledge. Although learning tabula rasa has achieved state-of-the-art performance in many tasks, we believe that complicated NLP problems such as NLI would benefit from leveraging knowledge accumulated by humans, at least in the foreseeable future, in which machines are unable to learn such knowledge from limited data. A typical neural-network-based NLI model consists of roughly four components: encoding the input sentences, performing co-attention across premise and hypothesis, collecting and computing local inference, and performing sentence-level inference judgment by aggregating or composing local inference information. In this paper, we propose models that are capable of leveraging external knowledge in the co-attention, local inference collection, and inference composition components.
We demonstrate that utilizing external knowledge in neural network models outperforms the previously reported best models. The advantage of using external knowledge is more significant when the size of the training data is restricted, suggesting that if more knowledge can be obtained, it may yield more benefit. Specifically, this study shows that external semantic knowledge helps mostly in attaining more accurate local inference information, but also benefits co-attention and the aggregation of local inference.

Early work on natural language inference (also called recognizing textual entailment) was performed on quite small datasets with conventional methods, such as shallow methods BID13 and natural logic methods BID20, among others. These works already show the usefulness of external knowledge, such as WordNet BID22, FrameNet BID1, and so on. More recently, the large-scale SNLI dataset was made available, which made it possible to train more complicated neural networks. These models fall into two kinds of approaches: sentence encoding-based models and inter-sentence attention-based models. Sentence encoding-based models use a Siamese architecture BID7: parameter-tied neural networks are applied to encode both the premise and the hypothesis, and a neural network classifier (i.e., a multilayer perceptron) is then applied to decide the relationship between the two sentence representations. Different neural networks have been utilized as sentence encoders, such as LSTMs BID5, GRUs BID33, CNNs BID23, BiLSTMs and their variants BID17 BID9, and more complicated neural networks BID6 BID24 BID1. The advantage of encoding-based models is that the encoders transform sentences into fixed-length vector representations, which can help a wide range of transfer tasks BID12. However, this architecture ignores the local interaction between the two sentences, which is necessary in the traditional natural language inference procedure BID20. Therefore, inter-sentence attention-based models were proposed to relieve this problem. In this framework, local inference information is collected by the attention mechanism and then fed into neural networks to be composed into fixed-sized vectors before the final classification. Many related works follow this route BID29 BID34 BID11 BID27 BID8. Among them, BID29 were the first to propose neural attention-based models for NLI. BID8 proposed an enhanced sequential inference model (ESIM), which is one of the best models so far and is regarded as the baseline in this paper. In general, external knowledge has been shown to be effective in a wide range of NLP tasks, including machine translation BID32, language modeling BID0, and dialogue systems. For NLI, to the best of our knowledge, we are the first to utilize external knowledge together with neural networks. In this paper, we first show that a neural network equipped with external knowledge obtains further improvement over the already strong baseline, achieving an accuracy of 88.6% on the SNLI benchmark. Furthermore, we show that the gain is more significant when using fewer training samples. External knowledge needs to be converted to a numerical representation to enrich a natural language inference model. One approach to representing external knowledge is using knowledge-graph embeddings, such as TransE BID4, TransH BID35, TransG BID38, and so on. However, these kinds of approaches usually need to train a knowledge-graph embedding beforehand.
In this paper, we propose to use relation features to describe the relationship between the words in any word pair, which can be easily obtained from various knowledge graphs, such as WordNet BID22 and Freebase BID2. Specifically, we use WordNet to measure the semantic relatedness of the words in a pair using various relation types, including synonymy, antonymy, hypernymy, and so on. Each of these features is a real number on the interval [0, 1]. The definitions and instances of the pair features derived from WordNet are shown in TAB0. The setting of the features follows prior work, but we add a new feature, same hypernym, which improves the results significantly in our experiments. Intuitively, the synonymy, hypernymy and hyponymy features help model entailment in word pairs; the antonymy and same hypernym features help model contradiction in word pairs. We regard the vector r ∈ R^{D_r} as the relation feature derived from external knowledge, where D_r is 5 in our experiments. The vector r will be used in the neural inference model to capture external semantic knowledge. TAB1 reports some key statistics of the relation features from WordNet; a hedged sketch of extracting such features is given below.
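The following sketch illustrates how binary versions of these five relation features could be extracted with NLTK's WordNet interface; it is a simplification (the paper's hypernymy/hyponymy features may be graded rather than binary), the function name is ours, and it assumes the NLTK WordNet data has been downloaded.

```python
from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

def relation_features(w1, w2):
    """Binary simplification of the 5 WordNet pair features: r in R^5, each in [0, 1]."""
    s1, s2 = wn.synsets(w1), wn.synsets(w2)
    syn = float(any(a == b for a in s1 for b in s2))
    ant = float(any(l2 in l1.antonyms()
                    for a in s1 for l1 in a.lemmas()
                    for b in s2 for l2 in b.lemmas()))
    hyper = float(any(b in a.closure(lambda s: s.hypernyms()) for a in s1 for b in s2))
    hypo = float(any(a in b.closure(lambda s: s.hypernyms()) for a in s1 for b in s2))
    same_hyper = float(any(set(a.hypernyms()) & set(b.hypernyms()) for a in s1 for b in s2))
    return [syn, ant, hyper, hypo, same_hyper]

print(relation_features("wheat", "corn"))  # same-hypernym feature should fire
```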
We present here our natural language inference models, which are composed of the following major components: input encoding, knowledge-enriched co-attention, knowledge-enriched local inference collection, and knowledge-enriched inference composition. Figure 1 shows a high-level view of the architecture. First, the premise and hypothesis are encoded by the input encoding component as context-dependent representations. Second, co-attention is calculated to obtain word-level soft alignments between the two sentences. Third, local inference information is collected to prepare for the final prediction. Fourth, the inference composition component aggregates over the whole sentences and makes the final prediction based on a fixed-size vector. Throughout, external knowledge serves as an auxiliary component to improve the calculation of co-attention, the collection of local inference information, and the composition of inference.

Figure 1: A high-level view of our neural inference networks. Given two sentences, e.g., the premise "The child is getting a pedicure" and the hypothesis "The kid is getting a manicure", the model needs to predict the relationship between them: entailment, contradiction, or neutral.

Given the word sequences of the premise a = (a_1, ..., a_M) and the hypothesis b = (b_1, ..., b_N), where M and N are the lengths of the sentences, the final objective is to predict a label y that indicates the logical relationship between a and b, i.e., ŷ = argmax_{y∈Y} P(y | a, b). Specifically, "<BOS>" and "<EOS>" are inserted as the first and last tokens, respectively. First, a and b are embedded into D_e-dimensional vectors [E(a_1), ..., E(a_M)] and [E(b_1), ..., E(b_N)] using an embedding matrix E ∈ R^{D_e×V}, where V is the vocabulary size and E can be initialized with pre-trained word embeddings from a universal corpus. To represent the words of the premise and hypothesis in a context-dependent way, the two sentences are fed into encoders to obtain the context-dependent hidden states a^s and b^s: a^s_i = BiLSTM(E(a), i) and b^s_j = BiLSTM(E(b), j). We employ bidirectional LSTMs (BiLSTMs) BID15 as encoders, which is a common choice for natural language. A BiLSTM runs a forward and a backward LSTM on a sequence, starting from the left and the right end, respectively. The hidden states generated by these two LSTMs at each time step are concatenated to represent that time step and its context: h_t = [h→_t ; h←_t]. The hidden state of a unidirectional LSTM (h→_t or h←_t) is calculated as follows:

i_t = σ(W_i x_t + U_i h_{t−1} + b_i),
f_t = σ(W_f x_t + U_f h_{t−1} + b_f),
o_t = σ(W_o x_t + U_o h_{t−1} + b_o),
c̃_t = tanh(W_c x_t + U_c h_{t−1} + b_c),
c_t = f_t ⊙ c_{t−1} + i_t ⊙ c̃_t,
h_t = o_t ⊙ tanh(c_t),

where σ is the sigmoid function, ⊙ is the element-wise multiplication of two vectors, and W ∈ R^{D×D_e}, U ∈ R^{D×D}, and b ∈ R^D are parameters to be learned. D is the dimension of the hidden states in the LSTM. The LSTM utilizes a set of gating functions for each input vector x_t, i.e., the input gate i_t, forget gate f_t, and output gate o_t, together with a memory cell c_t, to generate a hidden state h_t.

In this component, we acquire soft alignments of word pairs between the premise and hypothesis based on our knowledge-enriched co-attention mechanism. Given the relation features r_ij ∈ R^{D_r} between the premise's i-th word and the hypothesis's j-th word from the external knowledge, the co-attention is calculated as e_ij = (a^s_i)⊤ b^s_j + F(r_ij). The function F can be any non-linear or linear function. Here we use F(r_ij) = λ·1(r_ij), where λ is a hyper-parameter tuned on the development set and 1 is the indicator function: 1(r_ij) = 1 if r_ij is not a zero vector, and 0 if it is. Intuitively, word pairs with a semantic relationship in any of the features are more likely to be aligned together. Soft alignment is determined by the co-attention matrix e ∈ R^{M×N} computed above, which is used to obtain the local relevance between the premise and hypothesis. For the hidden state of a word in the premise, i.e., a^s_i (already encoding the word itself and its context), the relevant semantics in the hypothesis are identified in a context vector a^c_i using e_ij; more specifically,

α_ij = exp(e_ij) / Σ_{k=1}^{N} exp(e_ik),    a^c_i = Σ_{j=1}^{N} α_ij b^s_j,
β_ij = exp(e_ij) / Σ_{k=1}^{M} exp(e_kj),    b^c_j = Σ_{i=1}^{M} β_ij a^s_i,

where α ∈ R^{M×N} and β ∈ R^{M×N} are the attention weight matrices normalized with respect to the 2-axis and 1-axis, respectively. The same calculation is performed for each word in the hypothesis, i.e., b^s_j. Local inference information is then collected as

a^m_i = [a^s_i ; a^c_i ; a^s_i − a^c_i ; a^s_i ⊙ a^c_i],    b^m_j = [b^s_j ; b^c_j ; b^s_j − b^c_j ; b^s_j ⊙ b^c_j],

where a heuristic matching trick with difference and element-wise product is used BID23 BID8. The last terms aim to capture the local inference relationship between the original vectors (a^s, b^s) and the context vectors (a^c, b^c); a compact sketch of these operations follows.
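A minimal PyTorch sketch of the knowledge-enriched co-attention and local inference collection above, assuming the relation features r have already been computed (the function name is ours):

```python
import torch

def knowledge_coattention(a_s, b_s, r, lam=1.0):
    """a_s: (M, D) premise states; b_s: (N, D) hypothesis states; r: (M, N, Dr)."""
    e = a_s @ b_s.T + lam * (r.abs().sum(-1) > 0).float()  # e_ij = (a_i)^T b_j + lam*1(r_ij)
    alpha = torch.softmax(e, dim=1)           # normalize over hypothesis positions
    beta = torch.softmax(e, dim=0)            # normalize over premise positions
    a_c, b_c = alpha @ b_s, beta.T @ a_s      # soft-aligned context vectors
    a_m = torch.cat([a_s, a_c, a_s - a_c, a_s * a_c], dim=-1)  # local inference, premise
    b_m = torch.cat([b_s, b_c, b_s - b_c, b_s * b_c], dim=-1)  # local inference, hypothesis
    return a_m, b_m
```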
In this component, we introduce knowledge-enriched inference composition. To determine the overall inference relationship between a premise and a hypothesis, we need a composition layer to compose the local inference vectors (a^m and b^m) collected above: a^v = BiLSTM(a^m) and b^v = BiLSTM(b^m). Here we also use BiLSTMs as the building blocks of the composition layer. The BiLSTMs read the local inference vectors and learn to judge the types of local inference relationships and to distinguish crucial local inference vectors for the overall sentence-level inference relationship. The responsibility of the BiLSTMs in the inference composition layer is completely different from that of the BiLSTMs in the input encoding layer. Our inference model converts the output hidden vectors of the BiLSTMs to a fixed-length vector with pooling operations and feeds it to the final classifier to determine the overall inference class. In particular, besides using mean pooling and max pooling as in ESIM BID8, we propose a weighted pooling based on external knowledge,

a^w = Σ_{i=1}^{M} [ exp(H(Σ_{j=1}^{N} r_ij)) / Σ_{i'=1}^{M} exp(H(Σ_{j=1}^{N} r_{i'j})) ] a^v_i,

and analogously for b^w. Intuitively, the final prediction is mostly determined by those word pairs appearing in the external knowledge. BID9 uses a similar idea, called gated attention, but they do not use external knowledge. In our experiments, we regard the function H as a 1-layer feed-forward neural network with a ReLU activation function. We concatenate all pooled vectors, i.e., mean, max, and weighted pooling, into a fixed-length vector and then feed this vector into the final multilayer perceptron (MLP) classifier. The MLP has a hidden layer with tanh activation and a softmax output layer in our experiments. The entire model is trained end-to-end by minimizing the cross-entropy loss.

The Stanford Natural Language Inference (SNLI) dataset BID5 focuses on three basic relationships between a premise and a potential hypothesis: the premise entails the hypothesis (entailment), they contradict each other (contradiction), or they are not related (neutral). We use the same data split as in previous work, and use classification accuracy as the evaluation metric, as in related work. WordNet 3.0 BID22 is used to extract semantic relation features between words, as described in Section 3.1. The words are lemmatized using Stanford CoreNLP 3.7.0 to match words in WordNet, but the input word sequences for the input encoding layer are only tokenized, without lemmatization. We release our code at [xxx] for replicability purposes. The models are selected on the development set. Some of our training details are as follows: the dimensions of the hidden states of the LSTMs and of the word embeddings are 300. The word embeddings are initialized with 300D GloVe 840B vectors BID28, and out-of-vocabulary words are initialized randomly. All word embeddings are updated during training. Adam BID16 is used for optimization with an initial learning rate of 0.0004. The mini-batch size is set to 32. Dropout with a keep rate of 0.5 and early stopping with a patience of 7 are used to avoid overfitting. The gradient is clipped at a maximum L2-norm of 10. The trade-off λ for calculating co-attention is selected from [0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50] based on the development set. (The three pooling operations are sketched below.)
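A hedged sketch of the pooling stage: mean, max, and a knowledge-weighted pooling whose weights depend on how much external knowledge fires at each position. The exact form of the weighted pooling is our reading of the equation above, and the module name is illustrative.

```python
import torch
import torch.nn as nn

class KnowledgePooling(nn.Module):
    def __init__(self, d_relation=5):
        super().__init__()
        self.H = nn.Sequential(nn.Linear(d_relation, 1), nn.ReLU())  # 1-layer FFN with ReLU

    def forward(self, a_v, r):
        """a_v: (M, D) composed hidden states; r: (M, N, Dr) relation features."""
        w = torch.softmax(self.H(r.sum(dim=1)), dim=0)   # (M, 1) knowledge-based weights
        weighted = (w * a_v).sum(dim=0)                  # knowledge-weighted pooling
        return torch.cat([a_v.mean(0), a_v.max(0).values, weighted])  # fixed-length vector
```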
TAB2 (test accuracies on SNLI, %):

Model                                  Test acc.
LSTM BID5                              80.6
GRU BID33                              81.4
Tree CNN BID23                         82.1
SPINN-PI BID6                          83.2
NTI BID25                              83.4
Intra-Att BiLSTM                       84.2
Self-Att BiLSTM BID17                  84.2
NSE BID24                              84.6
Gated-Att BiLSTM BID9                  85.5
DiSAN BID31                            85.6
LSTM Att BID29                         83.5
mLSTM BID34                            86.1
LSTMN BID11                            86.3
Decomposable Att BID27                 86.8
NTI BID25                              87.3
Re-read LSTM BID30                     87.5
BiMPM                                  87.5
btree-LSTM BID26                       87.6
DIM BID14                              88.0
ESIM BID8                              88.0
KIM                                    88.6
HIM (ESIM+Syntactic TreeLSTM) BID8     88.6
BiMPM (Ensemble)                       88.8
DIIN (Ensemble)                        88.9
KIM (Ensemble)                         89.1

TAB2 shows the results of different models on the SNLI dataset. The first group of models uses sentence-encoding based approaches. BID5 employs LSTMs as encoders for both the premise and hypothesis, producing two fixed-size sentence vectors; the sentence representations are then fed to an MLP classifier to predict the final inference relationship. The accuracy on the test set is 80.6%. Many related works follow this framework, using different neural networks as encoders. Their performances are also listed in the first group in TAB2. Among them, Gated-Att BiLSTM BID9 achieves an accuracy of 85.5%, the state of the art for sentence-encoding based approaches. The second group of models uses a cross-sentence attention mechanism, which can obtain soft-alignment information between cross-sentence word pairs. BID34 proposes a matching-LSTM to compare the inference information of locally aligned words, and obtains a higher accuracy of 86.1%, better even than the state-of-the-art sentence-encoding models. Other related models are also listed in the second group in TAB2. Among them, ESIM BID8 is the previous state-of-the-art system, with a test-set accuracy of 88.0%. The proposed model, namely the Knowledge-based Inference Model (KIM), which enriches ESIM with external knowledge, obtains an accuracy of 88.6%. The difference between ESIM and KIM is statistically significant under a one-tailed paired t-test at the 99% significance level. To the best of our knowledge, this is a new state of the art. Our ensemble model, which averages the probability distributions of ten individual single KIMs with different initializations, achieves an even higher accuracy of 89.1%.

To compare the importance of external knowledge under different training data scales, we randomly sample different ratios of the whole training set, i.e., 0.8%, 4%, 20% and 100%. "A" indicates adding external knowledge in calculating the co-attention matrix, "I" indicates adding external knowledge in collecting local inference information, and "C" indicates adding external knowledge in composing inference. With restricted training data, i.e., the 0.8% training set (about 4,000 samples), our baseline ESIM has a poor accuracy of 62.4%. When we only add external knowledge in calculating co-attention ("A"), the accuracy increases to 66.6% (+4.2 points absolute). When we only utilize external knowledge in collecting local inference information ("I"), the accuracy shows a significant gain, to 70.3% (+7.9 points absolute). When we only add external knowledge in inference composition ("C"), the accuracy shows a smaller gain, to 63.4% (+1.0 point absolute). The comparison indicates that "I" plays the most important role among the three components in using external knowledge. Moreover, when we combine the three components ("A,I,C"), we obtain the best result of 72.6% (+10.2 points absolute). When we use more training data, i.e., 4%, 20%, or 100% of the training set, utilizing external knowledge only in collecting local inference information ("I") still achieves a significant gain, but "A" and "C" do not bring any significant improvement. The results indicate that external semantic knowledge only helps in co-attention and composition when there is limited training data, but always helps in collecting local inference information. Meanwhile, with less training data λ is usually set to a larger value; for example, the optimal λ tuned on the development set is 20 for the 0.8% training set, 2 for the 4% training set, 1 for the 20% training set, and 0.2 for the 100% training set. Figure 3 displays the results of using different ratios of external knowledge for different training data sizes. Note that here we only use external knowledge in collecting local inference information, because it works well across all scales of the training set. Better accuracies are achieved when using more external knowledge. Especially under the condition of restricted training data (0.8%), the model obtains a large gain when using more than half of the external knowledge. Our enriched neural network-based model for natural language inference with external knowledge, namely KIM, achieves a new state-of-the-art accuracy on the SNLI dataset.
The model is equipped with external knowledge in the major informal inference components: calculating co-attention, collecting local inference, and composing inference. The proposed approach of infusing neural networks with external knowledge may also shed some light on tasks other than NLI, such as question answering and machine translation.
The proposed models with external knowledge further improve the state of the art on the SNLI dataset.
957
scitldr
High intra-class diversity and inter-class similarity is a characteristic of remote sensing scene image data sets, currently posing significant difficulty for deep learning algorithms on classification tasks. To improve accuracy, post-classification methods have been proposed for smoothing model predictions. However, those approaches require an additional neural network to perform the smoothing operation, which adds overhead to the task. We propose an approach that involves learning deep features directly over neighboring scene images without requiring the use of a cleanup model. Our approach utilizes a siamese network to improve the discriminative power of convolutional neural networks on a pair of neighboring scene images, and then exploits the semantic coherence between this pair to enrich the feature vector of the image for which we want to predict a label. Empirical results show that this approach provides a viable alternative to existing methods. For example, our model improved prediction accuracy by 1 percentage point and dropped the mean squared error by 0.02 over the baseline on a disease density estimation task. These performance gains are comparable with results from existing post-classification methods, moreover without their implementation overheads.

Remote sensing scene image analysis is emerging as an important area of research for the application of deep learning algorithms. Application areas include land-use/land-cover analysis, urban planning, and natural disaster detection. A deep learning task for labeling a scene image is typically formulated as a conditional probability of the form P(l_i | s_i) (Eq. 1), where l_i is the label for image patch s_i. This formulation is sufficient for problems where the spatial situatedness of a scene, which embodies knowledge of semantic likeness between neighborhoods in the geophysical world, is not important. However, for problems which require knowledge of neighborhood, the formulation in Eq. 1 becomes inadequate. An example of such a problem is estimating disease density for a small geographical region of interest, in which case the probability of label l is likely to depend on the labels of neighboring regions due to the semantic coherence among them. The problem of how to improve model predictions by leveraging semantic coherence among neighboring scene images has previously been considered in the literature. Previous studies treat the problem as a post-classification task. For example, one early study used a second classifier to perform pixel smoothing to refine the predictions made by another classifier: based on a 5x5 window, a filter assigns a pixel to the majority class if it had been assigned a different class (a toy sketch of such a filter is given at the end of this section). Elsewhere, a post-processing architecture has been suggested for incorporating structure into image patch prediction. It involves stacking neural networks (NN) such that the output of one becomes the input of the next, the idea being that each network cleans up the predictions of the previous one in order to progressively improve overall accuracy. While improved model performance was achieved by these methods, they have the overhead of performing the same classification task in at least two stages; in other words, a minimum of two NNs is needed to perform one classification task. Unlike post-classification methods, this work considers the problem of improving model accuracy on scene images by exploiting knowledge of neighboring scenes as part of the model training process.
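For illustration, a toy sketch of 5x5 majority-filter smoothing on a grid of predicted class labels (this is our own example, not the cited paper's code):

```python
import numpy as np
from scipy.ndimage import generic_filter

def majority_smooth(labels, window=5):
    """Replace each pixel's class with the majority class in its window."""
    def majority(values):
        vals, counts = np.unique(values, return_counts=True)
        return vals[np.argmax(counts)]
    return generic_filter(labels, majority, size=window, mode="nearest")

smoothed = majority_smooth(np.random.randint(0, 4, size=(64, 64)))
```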
We make the assumption that l is conditionally co-dependent on information embedded in scene image s_i and in another similar, neighboring image s_j, such that the problem is formulated as a probability of the form P(l_i | s_i, s_j) (Eq. 2), where s_j is the image of a neighboring tile that is most similar to the index tile i and P(l_i | s_i, s_j) is the observed probability distribution. We used convolutional neural networks (CNNs) to model the observed probability distribution in Eq. 2. A network architecture is proposed for training our model, consisting of four components: a siamese sub-network, a similarity metric learning component, a convolutional network, and a decision layer. The siamese sub-network takes two neighboring scene images as input and extracts features from each. The similarity learning component evaluates how similar the input images are, using the extracted features. If the two input images are found to be similar, the convolutional network learns additional features based on the merged feature vector; otherwise the features from the index tile are used alone. We implemented the decision layer to perform classification or regression. A baseline model was implemented that takes a single image, the index tile i, as input. Empirical results show the proposed model consistently outperforms the baseline. In addition to improving predictive performance with a relatively small training set, our model is fast to train since it uses a pre-trained model for the siamese sub-network. Furthermore, it does not require another NN to smooth out its predictions, as is the case with post-classification approaches, while achieving a comparable performance gain. In summary, our contributions include the following.
1. We propose an approach for training a probabilistic deep learning model to improve prediction accuracy by exploiting semantic coherence between neighboring tiles in aerial scene images. A CNN architecture is suggested for this purpose.
2. We provide empirical evidence that demonstrates the viability of this approach on a disease density estimation task.
3. Lastly, we discovered an important limitation of the synthetic minority over-sampling technique (SMOTE). This method fails when used for oversampling an under-represented class in settings where knowledge of spatial proximity between scene image data points must be preserved, an important requirement under the framework of learning deep features over neighboring scene images introduced in this work.

Remote sensing scene images represent a difficult problem for deep learning algorithms due to high intra-class diversity and inter-class similarity. Previous efforts to address this problem are presented first, followed by the recent use of siamese networks to improve the discriminative ability of CNNs. One of the earliest works aimed at improving model accuracy by exploiting semantic coherence between neighboring images addressed the problem of 'salt and pepper' noise in classified satellite images. The authors employ a multi-layer perceptron (MLP) consisting of five hidden units to classify low-resolution (30m) satellite scene images. Their proposed network has four output units corresponding to four land-use classes. The sigmoid activation function is used in both hidden and output units. The network has seven input units, each corresponding to one of seven Landsat TM spectral bands. It generates two separate channels when classifying an input image: one for the class prediction and the other representing how confident the classification is.
A post-classification smoothing operation is then performed using a two-layer NN whose input is the classified image within a 5x5 window. For each pixel, the predicted class and the confidence information are used as input to decide its new class assignment by applying a majority filter. The gain in classification accuracy was 2.9 percent for the network whose input data was enhanced with texture information (and 4.9 percent for the one without enhancement). A related line of work introduces indirect dependencies between the outputs of a model by using knowledge of structure (e.g., shape) to resolve the issue of disconnected blobs and holes or gaps in predicted building and road network maps. The architecture used consists of stacked NNs, each using as input the outputs of the previous network. Precisely, let M̂_0 be the map predicted by a model. The i-th level cleanup NN, f_i, takes a w_s x w_s patch of M̂_{i−1} and outputs a w_m x w_m patch of M̂_i. The model f_i is trained by minimizing the negative log likelihood on patches of the observed map M, just like the other NNs in the architecture. Thus, the i-th cleanup NN improves the predictions of the previous NN f_{i−1}. The authors extended this idea to Conditional Random Fields (CRF), i.e., a pairwise lattice CRF, to introduce explicit dependencies between pairs of neighboring pixels. For both the NN and the CRF, an average gain of 0.0195 was achieved in precision-recall break-even points. The above works demonstrate that post-classification smoothing operations can improve the performance of deep learning models. However, a drawback of those approaches is the requirement to implement additional models to perform the smoothing operations. Approaches that sidestep this requirement would therefore be a welcome alternative.

Another innovation that inspired the current work is siamese neural networks (SNN). An SNN architecture consists of two identical (shared-weight) sub-networks merged by the same function at their outputs. During training, each sub-network extracts features from one of two concurrent inputs, after which the joining function evaluates whether the input pair is similar based on a distance metric, such as the cosine of the angle between the pair of feature vectors. The output of an SNN is a decision score, e.g., 1 for a similar image pair and 0 otherwise. SNNs have recently been used for remote sensing scene image understanding, for instance to improve the discriminative ability of CNNs for satellite scene image classification. One such architecture consists of two identical CNN models, 3 additional convolutional layers, and one square non-parametric layer. Each branch of the siamese network is responsible for extracting features from one of the dual input images, giving feature vectors f_1 and f_2. Two of the convolutional layers take these feature vectors as input and perform additional learning before a softmax function predicts the label of each input image. The square layer, on the other hand, takes f_1 and f_2 as input and outputs the tensor f_s = (f_1 − f_2)^2, which is used to measure the similarity between the input pair. The model thus produces three outputs: a predicted label for each input image and a score for their similarity. More precisely, feature discrimination enhancement is achieved in the square layer by imposing a metric-learning regularization term that minimizes the Euclidean distance between a similar image pair and maximizes it for a dissimilar pair (Eq. 3), where D(x_i, x_j) is the distance between a pair of input images.
A margin τ is set to separate similar image pairs from dissimilar ones as far apart as possible in feature space. If (x_i, x_j) is from the same scene class, the distance between them is less than τ; otherwise it is greater than τ. Let the labels of the training pair (x_i, x_j) be (y_i, y_j), respectively. The Euclidean distance between the pair is given by D(x_i, x_j) = ||f(x_i) − f(x_j)||_2 (Eq. 4), where f(·) denotes the features extracted by a branch of the siamese network. That model achieves feature discrimination by minimizing the feature distance between similar input pairs subject to the margin τ, and therefore optimizes two objective functions, a distance loss function and a regularization term, as shown in Eq. 5. Of the three pre-trained models experimented with in that work (AlexNet, VGG-16, and ResNet-50), the highest feature discrimination accuracy gain (1.14 percentage points) was achieved with a siamese AlexNet model on the challenging NWPU-RESIC45 benchmark data set; a related study reported a 1.53 percentage point gain in accuracy on the same data set.

An end-to-end pipeline is proposed consisting of four components: a siamese sub-network, a similarity metric learning function, a convolutional network, and a decision layer. The siamese sub-network takes a pair of scene images s_i (the tile we want to predict a label for) and s_j (another tile neighboring s_i) as input and extracts features from each, outputting feature vectors f_i and f_j, respectively. The similarity function takes the two feature vectors as input and learns a similarity metric between them. If the feature vector pair (f_i, f_j) is found to be similar, the vectors are merged and the resulting feature vector is used to train the convolutional network; otherwise training is done using f_i alone. The output of the convolutional network is passed as input to the decision layer to predict a label (or to regress a numerical value) for image patch s_i. The proposed and baseline architectures are shown in Figures 1a and 1b, respectively.

To implement the siamese sub-network we explored two state-of-the-art pre-trained CNN models, i.e., Xception net and ResNet-50, in a transfer learning (fine-tuning) strategy, since we had a small training set. The architecture of the Xception model is based on separating the learning of space-wise and channel-wise feature representations. The result is better performance than architectures such as Inception V3, ResNet-152, and VGG-16 in terms of computational speed and classification accuracy. ResNet architectures, on the other hand, address the problem of vanishing/exploding gradients in deep neural network architectures by inserting shortcut connections between convolutional blocks. The ResNet model has the benefit of enabling deeper architectures to be utilized in CNN models, since these provide higher accuracy rates than shallow architectures.

The similarity function takes the feature vector pair (f_i, f_j) as input to evaluate how similar the input image pair is. This unit outputs a binary decision score: the neighboring input pair (s_i, s_j) is similar or dissimilar. The optimization function used here works by minimizing the feature distance between similar images. Following prior work, given a pair of input images s_i and s_j, the distance between them is calculated using Eq. 3. A margin τ separates similar image features from dissimilar ones, such that D(f_i, f_j) < τ for a similar pair and D(f_i, f_j) > τ otherwise. We also optimized two objective functions, as defined in Eq. 5. (The gating behavior is sketched below.)
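A minimal sketch of the gating behavior described above, assuming the feature averaging merge operation used later in the paper (the function name is ours):

```python
import torch

def gated_merge(f_i, f_j, tau=1.0):
    """Merge by averaging only if the neighbor pair is judged similar; else keep f_i."""
    d = torch.norm(f_i - f_j, p=2)            # Euclidean distance between feature vectors
    return 0.5 * (f_i + f_j) if d < tau else f_i
```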
For any image patch in the middle region of a w x h scene image S, where w ≥ 3 and h ≥ 3, there will be eight such tiles to choose from, considering the four adjacent and four diagonal neighbors. For edge tiles there will be five, while for corner tiles there will be three neighbors (Figure 2 in the Appendix). We can determine the neighbors of the tile at location i by specifying the maximum Haversine distance within which the centroid coordinates of all neighbors must lie. Take as an example tiles of size 250m^2 in a 3 x 3 scene image. The centroids of all eight neighbors of the center tile would fall within a maximum distance of 350m from its own centroid. We can evaluate how similar the tile at i is to its neighbor at location j using the Euclidean distance. If at least one neighbor is similar, we select the neighbor with the smallest distance as the most similar. Given that D(s_i, s_j) < τ for images from the same scene class, a neighboring image is most similar to s_i if it has the smallest distance (Eq. 6), where ŝ_i is the neighboring image most similar to the index tile at location i. In the current analysis, however, we consider only the preceding neighbor of the tile at location i (image s_{i−1}) for similarity analysis. The loss function used is binary cross-entropy, defined in Eq. 7. This network is trained using the features resulting from merging the feature vector pair (f_i, f_j) if the respective input image pair (s_i, s_j) is found to be similar, outputting the feature vector f_m(f_i, f_j). If the input pair is dissimilar, only the feature vector f_i is fed to the convolutional network. Our implementation used three dense layers in the convolutional network. The merge operation used is feature averaging. For a classification task we learn the parameters (weights) of the CNN by minimizing the negative log likelihood of the training data under our model. For the multi-class problem considered in this work, the negative log likelihood under the model in Eq. 2 assumes the form of a cross-entropy between the probability distribution of the observed labels l and the predicted label probabilities ŷ (Eq. 8), where M, the number of classes, is 3. The outer sum of the objective function L is over all training samples. Stochastic gradient descent with mini-batches is used for optimizing L. Our implementation for the current problem uses a fully connected network (FCN) with a 3-way softmax classifier. Taken as a regression task, we optimized for the mean absolute error (MAE) loss and the mean squared error (MSE). The form of MAE we used is given by Eq. 9, where M, the number of features to be predicted, is 2. For MSE the loss function is given by Eq. 10. We applied our method to the task of estimating disease density from satellite scene imagery. Below we describe the data sets and methods used. A number of data sets were used in our experiments. The epidemic data consists of monthly disease case counts aggregated by sub-county for the year 2015. We dis-aggregated the data to 250m^2 grids, our geospatial unit of analysis, using external population data. The latter data gives an estimate of the number of people living in a 30m^2 grid based on recent census data and high resolution (0.5m^2) satellite imagery. The data was up-sampled to a 250m^2 grid to be consistent with our spatial unit of analysis. The population data was used to create a weighting scheme for dis-aggregating the epidemic data. The building (housing) concentration data consists of satellite imagery extracted from the Google Static Maps API using an existing method. It consists of land-use type buildings, i.e.
the objects of interest in a satellite scene image are buildings or houses. The housing data was used as input to train the model, as a proxy for indoor overcrowding, a known risk factor for our case study disease. All input data was pre-processed into feature vectors and normalized to the range by min-max scaling or to approximately (-1, 1) by standardizing (zero mean, unit variance). We created disease density classes out of the epidemic data to make it a classification task. We did this by binning the normalized data such that each 250m^2 grid assumed a class value depending on which bin its disease density lies in. Following an existing procedure for binning population data, we created a matrix C where an entry C_i = 0 if 5.74e-05 ≤ d_i ≤ 1.52e-4, 1 if 1.52e-4 < d_i ≤ 2.46e-4, and 2 if 2.46e-4 < d_i ≤ 3.4e-4, where d_i is the normalized disease density. The classes correspond to the semantic labels 0, 1, 2 for high, low, and moderate disease density, with membership counts of 3,628, 6,472, and 1,970, respectively. Let D be a grid of disease case counts for a region of interest at time t, C the grid of disease density class label values, and S the grid of housing concentration data. For every disease case count value d_i and density class label value c_i there is an associated housing concentration image s_i. We formulated the learning task as estimating an unknown function f, based on the idea of learning deep features over neighboring scene images (one neighbor in this case), using Eq. 11 (Figure 1a), where c_i is the estimated disease density for a 250m^2 patch of land on the earth's surface (approximately 224 x 224 pixels per image), represented by image s_i. Here s_j is the image of a neighboring scene that is most similar to the image of the tile at location i, identified using the method in Eq. 6. We used a CNN to estimate the function f because the mapping from input data to disease incidence estimate is non-linear, noisy, and dependent on the semantic content of the input data. To train our model we used transfer learning by fine-tuning two pre-trained models on our data set, i.e. Xception net and ResNet-50. Transfer learning eliminates network architecture design time while often achieving higher accuracy rates than training a model from scratch. We applied the same regularization methods and other hyperparameter values (Table 4 in the Appendix) to ensure uniformity across the proposed and baseline network architectures. Once a satisfactory model was achieved, it was trained on a combined data set (12,070 images) of training and validation data before evaluating on the test set. Evaluation results for similarity metric learning for our proposed model built on top of ResNet-50 and Xception net are shown in Table 1. Xception net performed better than ResNet-50 on precision (0.70 vs. 0.39) and recall (0.60 vs. 0.10) on the task of detecting whether or not a pair of satellite scene images is similar, even though the overall performance is low for both pre-trained models (a maximum OA score of 83 percent, obtained with Xception net). Performance for the baseline and proposed model are shown in Table 2. Overall, the Xception-based model performed better than the ResNet-50-based one on the task of estimating disease density from housing satellite scene data when performance is measured using the precision, recall, and F1-score metrics. However, both models generally performed poorly, achieving a maximum overall accuracy score of 44 percent for ResNet-50. Our proposed model, however, consistently outperformed the baseline across both pre-trained models (scores marked in bold).
For example, using Xception net as the siamese network, our model achieved a precision score of 0.15 vs. 0.13, recall of 0.39 vs. 0.26, and an F1-score of 0.22 vs. 0.18 relative to the baseline model. A similar performance trend is observed for the model based on ResNet-50. Confusion matrix plots for the baseline and our model built using Xception net are shown in Figures 3a and 3b, respectively (Appendix), while Figure 4 shows AUROC plots. Both show that our model performed better than the baseline on the medium and low disease density classes, with 58 vs. 48 percent and 47 vs. 37 percent, respectively. However, the model performed worse than the baseline on the high disease density class (0 vs. 13 percent, respectively). Generally, the results from both models are close to chance. Table 3 gives MAE and MSE scores for the baseline and proposed model. Overall, the model built using ResNet-50 performed better than the one built with Xception net when used to estimate disease density from housing concentration satellite scene image data. Again, our proposed model performed consistently (even though marginally) better than the baseline (scores marked in bold). It achieved an MAE score of 0.35 vs. 0.38 and an MSE score of 0.18 vs. 0.20 relative to the baseline. A similar trend is observed for the model built with Xception net as base. Our model performed better than the baseline in both the classification and regression tasks for disease density estimation. For example, our model achieved a 1 percentage point gain in accuracy over the baseline model. While a gain as a result of deploying a siamese network to boost the discriminative power of a CNN for aerial scene image classification is consistent with findings in previous studies, the overall results from our model are poor. For instance, our model was only able to attain a maximum overall accuracy of 34 percent on the classification task. We attribute these poor results to a combination of three factors. First, the small data set used to train our model (12,070 images) could have impacted accuracy negatively, despite the use of regularization methods. It is therefore possible that our model suffered from overfitting. Secondly, our data set was unbalanced. It is likely that the extra parameter we introduced in the loss function to weight classes, giving higher importance to under-represented classes, did not work as expected. The result could have been that our model did not learn all the necessary features required to make a prediction but rather resorted to guessing the output, and hence failed to generalize well over the test set. Class imbalance in our data set could also have negatively affected feature correlation, which in turn could have reduced model performance. Besides, well-known methods for mitigating sample size bias in imbalanced data sets, for example over-sampling under-represented classes, could not be applied directly to our data set without modifying the algorithm. This is because it was not immediately clear how to preserve spatial proximity between neighboring tiles, an idea that is central to learning deep features over neighboring scene images. However, despite the low overall performance of our model, we have been able to demonstrate that it is possible to improve model accuracy by learning deep features over neighboring scene images in a disease density estimation task. Figure 2: Finding a neighboring image j that is semantically most similar to image i.
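The neighbor-identification rule illustrated in Figure 2 can be sketched as follows. This is an illustrative implementation, not the authors' code: the 350m radius for 250m^2 tiles comes from the text, while the feature-distance threshold tau and all function names are assumptions.

```python
import numpy as np

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between lat/lon points (vectorized)."""
    r = 6_371_000.0
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dp, dl = np.radians(lat2 - lat1), np.radians(lon2 - lon1)
    a = np.sin(dp / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
    return 2 * r * np.arcsin(np.sqrt(a))

def most_similar_neighbor(i, centroids, features, max_dist_m=350.0, tau=1.0):
    """Eq. 6 style selection: among tiles whose centroids lie within
    max_dist_m of tile i, return the index with the smallest feature
    distance, or None if no candidate passes the similarity threshold."""
    lat_i, lon_i = centroids[i]
    dists = haversine_m(centroids[:, 0], centroids[:, 1], lat_i, lon_i)
    cand = np.where((dists > 0) & (dists <= max_dist_m))[0]  # exclude self
    if cand.size == 0:
        return None
    feat_d = np.linalg.norm(features[cand] - features[i], axis=1)
    return int(cand[np.argmin(feat_d)]) if feat_d.min() < tau else None
```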
Approach for improving prediction accuracy by learning deep features over neighboring scene images in satellite scene image analysis.
Sparsely available data points cause a numerical error on finite differences which hinder to modeling the dynamics of physical systems. The discretization error becomes even larger when the sparse data are irregularly distributed so that the data defined on an unstructured grid, making it hard to build deep learning models to handle physics-governing observations on the unstructured grid. In this paper, we propose a novel architecture named Physics-aware Difference Graph Networks (PA-DGN) that exploits neighboring information to learn finite differences inspired by physics equations. PA-DGN further leverages data-driven end-to-end learning to discover underlying dynamical relations between the spatial and temporal differences in given observations. We demonstrate the superiority of PA-DGN in the approximation of directional derivatives and the prediction of graph signals on the synthetic data and the real-world climate observations from weather stations. Modeling real world phenomena, such as climate observations, traffic flow, physics and chemistry simulation (; ; ; de ; ;), is important but extremely challenging. While deep learning has achieved remarkable successes in prediction tasks by learning latent representations from data-rich applications such as image recognition , text understanding , and speech recognition, we are confronted with many challenging scenarios in modeling natural phenomena by deep neural networks when a limited number of observations are only available. Particularly, the sparsely available data points cause substantial numerical error and the limitation requires a more principled way to redesign deep learning models. Although many works have been proposed to model physics-simulated observations using deep learning, many of them are designed under the assumption that input is on a continuous domain. For example, Raissi et al. (2017a; proposed Physics-informed neural networks (PINNs) to learn nonlinear relations between input (spatial-and temporal-coordinates (x, t)) and output simulated by a given PDE. Since Raissi et al. (2017a; use the coordinates as input and compute derivatives based on the coordinates to represent a given PDE, the setting is only valid when the data are continuously observed over spatial and temporal space. Under the similar direction of proposed a method to leverage the nonlinear diffusion process for image restoration. de incorporated the transport physics (advection-diffusion equation) with deep neural networks for forecasting sea surface temperature by extracting the motion field. introduced Deep Lagrangian Networks specialized to learn Lagrangian mechanics with learnable parameters. proposed a physicsinformed regularizer to impose data-specific physics equations. In common, the methods in; de; are not efficiently applicable to sparsely discretized input as only a small number of data points are available and continuous properties on given space are not easily recovered. It is inappropriate to directly use continuous differential operators to provide local behaviors because it is hard to approximate the continuous derivatives precisely with the sparse points (; ;). Furthermore, they are only applicable when the specific physics equations are explicitly given and still hard to be generalized to incorporate other types of equations. As another direction to modeling physics-simulated data, proposed PDE-Net which uncovers the underlying hidden PDEs and predicts the dynamics of complex systems. 
derived new CNNs: parabolic and hyperbolic CNNs based on ResNet architecture motivated by PDE theory.; are flexible to uncover hidden physics from the constrained kernels, it is still restrictive to a regular grid where the proposed constraints on the learnable filters are easily defined. Reasoning physical dynamics of discrete objects has been actively studied; ) as the appearance of graph-based neural networks (; ;). Although these models can handle sparsely located data points without explicitly given physics equations, they are purely data-driven so that the physics-inspired inductive bias, exploiting finite differences, is not considered at all. In contrast, our method consists of physics-aware modules allowing efficiently leveraging the inductive bias to learn spatiotemporal data from the physics system. In this paper, we propose Physics-aware Difference Graph Networks (PA-DGN) whose architecture is motivated to leverage differences of sparsely available data from the physical systems. The differences are particularly important since most of the physics-related dynamic equations (e.g., Navier-Stokes equations) handle differences of physical quantities in spatial and temporal space instead of using the quantities directly. Inspired by the property, we first propose Spatial Difference Layer (SDL) to efficiently learn the local representations by aggregating neighboring information in the sparse data points. The layer is based on Graph Networks (GN) as it easily leverages structural features to learn the localized representations and the parameters for computing the localized features are shared. Then, the layer is connected with Recurrent Graph Networks (RGN) to be combined with temporal difference which is another core component of physics-related dynamic equations. PA-DGN is applicable to various tasks and we provide two representative tasks; the approximation of directional derivatives and the prediction of graph signals. • We tackle a limitation of the sparsely discretized data which cause numerical error to model the physical system by proposing Spatial Difference Layer (SDL) for efficiently exploiting neighboring information under the limitation of sparsely observable points. • We combine SDL with Recurrent Graph Networks to build PA-DGN which automatically learns the underlying spatiotemporal dynamics in graph signals. • We verify that PA-DGN is effective in approximating directional derivatives and predicting graph signals in synthetic data. Then, we conduct exhaustive experiments to predict climate observations from land-based weather stations and demonstrate that PA-DGN outperforms other baselines. In this section, we introduce the building module used to learn spatial differences of graph signals and describe how the module is used to predict signals in the physics system. As approximations of derivatives in continuous domain, difference operators have been used as a core role to compute numerical solutions of (continuous) differential equations. Since it is hard to derive closed-form expressions of derivatives in real-world data, the difference operators have been considered as alternative tools to describe and solve PDEs in practice. The operators are especially important for physics-related data (e.g., meteorological observations) because the governing rules behind the observations are mostly differential equations. Graph signal Given a graph G = (V, E) where V is a set of vertices V = {1, . . 
., N v} and E a set of edges E ⊆ {(i, j)|i, j ∈ V} (|E| = N e), graph signals on all nodes at time t are f (t) ∈ R Nv where f: V → R. In addition, graph signals on edges can be defined similarly, F (t) ∈ R Ne where F: E → R. Note that both signals can be multidimensional. Gradient on graph The gradient (∇) of a function on nodes of a graph is represented by finite difference where L 2 (V) and L 2 (E) denote vector spaces of node/edge functions, respectively. The gradients on a graph provide finite differences of graph signals and they become corresponding edge (i, j) features. Laplace-Beltrami operator Laplace-Beltrami operator (or Laplacian, ∆) in graph domain is defined as This operator is usually regarded as a matrix form in other literature, L = D − A where A is an adjacency matrix and D = diag(j:j =i A ij) is a degree matrix. According to , the gradient and Laplacian operator on the triangulated mesh can be discretized by incorporating the coordinates of nodes. To obtain the gradient operator, the per-face gradient of each triangular face is calculated first. Then, the gradient on each node is the area-weighted average of all its neighboring faces, and the gradient on edge (i, j) is defined as the dot product between the per-node gradient value and the direction vector e ij. The Laplacian operator can be discretized with Finite Element Method (FEM): where node j belongs to node i's immediate neighbors (j ∈ N i) and (α j, β j) are two opposing angles of the edge (i, j). While the difference operators are generalized in Riemannian manifolds , there are numerical error compared to those in continuous space and it can be worse when the nodes are spatially far from neighboring nodes because the connected nodes (j ∈ N i) of i's node fail to represent local features around the i-th node. Furthermore, the error is even larger if available data points are sparsely distributed (e.g., sensor-based observations). In other words, since the difference operators are highly limited to immediate neighboring information only, they are unlikely to discover meaningful spatial variations behind the sparse observations. To mitigate the limitation, we propose Spatial Difference Layer (SDL) which consists of a set of parameters to define learnable difference operators as a form of gradient and Laplacian to fully utilize neighboring information: where w ij are the parameters tuning the difference operators along with the corresponding edge direction e ij. Note that the two forms (Eq 1) are associated with edge and node features, respectively. The subscript in ∇ w and ∆ w denotes that the difference operators are functions of the learnable parameters w. w (g) ij and w (l) ij are obtained by integrating local information as follow: While the standard difference operators consider two connected nodes only (i and j) for each edge (i, j), Eq 2 uses a larger view (h-hop) to represent the differences between i and j nodes. Since Graph Networks (GN) are efficient networks to aggregate neighboring information, we use GN for g(·) function and w ij are edge features from the output of GN. Note that Eq 2 can be viewed as a higher-order difference equation because nodes/edges which are multi-hop apart are considered. w ij has a similar role of parameters in convolution kernels of CNNs. 
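A minimal sketch of these learnable difference operators is given below. It assumes Eq. 1 takes the two-weight gradient form suggested by the sharpening-filter example discussed next, replaces the h-hop Graph Network g(.) of Eq. 2 with a single MLP over the edge's endpoint features, and omits the edge direction vectors e_ij for brevity; none of this is the authors' exact code.

```python
import torch
import torch.nn as nn

class SpatialDifferenceLayer(nn.Module):
    """Per-edge learnable gradient and Laplacian weights (Eq. 1, sketch)."""
    def __init__(self, node_dim: int, hidden: int = 64):
        super().__init__()
        # Three weights per edge: (w_g1, w_g2) for the gradient, w_l for the
        # Laplacian; in the paper they come from a Graph Network over h hops.
        self.edge_net = nn.Sequential(
            nn.Linear(2 * node_dim, hidden), nn.ReLU(), nn.Linear(hidden, 3))

    def forward(self, f, edge_index):
        src, dst = edge_index                    # edge_index: (2, E) long tensor
        w = self.edge_net(torch.cat([f[src], f[dst]], dim=-1))
        w_g1, w_g2, w_l = w[:, 0:1], w[:, 1:2], w[:, 2:3]
        grad = w_g1 * f[dst] - w_g2 * f[src]     # modulated gradient per edge
        lap = torch.zeros_like(f)                # modulated Laplacian per node
        lap.index_add_(0, src, w_l * (f[dst] - f[src]))
        return grad, lap
```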
For example, while the standard gradient operator can be regarded as a simple edge-detecting filter, the operator becomes a sharpening filter if w^(g1)_ij = 1 and w^(g2)_ij = (|N_i| + 1)/|N_i| for node i and the operators over each edge are summed. In other words, by modulating w_ij, the operator is readily extended to conventional kernels, including edge-detection or sharpening filters, and even to more complicated kernels. On top of w_ij, the difference forms in Eq 1 intentionally base the optimization of the learnable parameters on differences instead of the values themselves. Thus, Eq 1 naturally provides the physics-inspired inductive bias which is particularly effective for modeling physics-related observations. Furthermore, it is possible to increase the number of channels of w^(g)_ij and w^(l)_ij to be more expressive. Figure 1 illustrates how the exemplary filters convolve the given graph signals.
Difference graph. Once the modulated spatial differences (∇_w f(t), ∆_w f(t)) are obtained, they are concatenated with the current signals f(t) to construct node-wise (z_i) and edge-wise (z_ij) features, and the resulting graph is called a difference graph. Note that the difference graph includes all information needed to describe spatial variations.
Recurrent graph networks. Given a snapshot (f(t), F(t)) of a sequence of graph signals, one difference graph is obtained and used to predict the next graph signals. While a non-linear layer can be used to combine the learned spatial differences to predict the next signals, it is limited to discovering spatial relations only among the features in the difference graph. Since many equations describing physics-related phenomena are non-static (e.g., Navier-Stokes equations), we adopt Recurrent Graph Networks (RGN) with a graph state G_h as input to combine the spatial differences with temporal variations. RGN returns a graph state G*_h = (h*(v), h*(e)) and the next graph signals z*_i and z*_ij. The update rule follows the standard Graph Network scheme: an edge update (h*(e)_ij, z*_ij) = φ_e(h(e)_ij, z_ij, z_i, z_j) is applied for all edges (i, j) ∈ E, followed by a node update (h*(v)_i, z*_i) = φ_v(h(v)_i, z_i, z̄_i) for all i ∈ V, where z̄_i is an aggregated edge attribute related to node i, and φ_e, φ_v are edge and node update functions, respectively, which can be recurrent units (e.g., GRU cells). Finally, the prediction is made through a decoder by feeding it the graph signals z*_i and z*_ij.
Figure 4: Gradients and graph structure of sampled points. Left: the synthetic function is f_1(x, y) = 0.1x^2 + 0.5y^2. Right: the synthetic function is f_2(x, y) = sin(x) + cos(y).
Learning objective. Let f̂ and F̂ denote predictions of the target node/edge signals. PA-DGN is trained by minimizing a prediction-error objective L over both node and edge signals (Eq 3). For multistep predictions, L is summed over the number of predicted steps. If only one type (node or edge) of signal is given, the corresponding term in Eq 3 is used to optimize the parameters in SDL and RGN simultaneously. To investigate whether the proposed spatial difference forms (Eq 1) are beneficial for learning physics-related patterns, we apply SDL to two different tasks: approximating directional derivatives and predicting synthetic graph signals.
Figure 3: Directional derivative on a graph.
As we claimed in Section 2.3, the standard difference forms (gradient and Laplacian) on a graph can easily become inaccurate because they are sensitive to the distance between two points and to the variations of the given function. To evaluate the applicability of the proposed SDL, we train SDL to approximate directional derivatives on a graph. First, we define a synthetic function and its gradients on 2D space and sample 200 points (x_i, y_i).
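This evaluation setup, together with the k-NN graph and the projection targets described in the next paragraph, is easy to reproduce. In the sketch below, the domain bounds and the random seed are assumptions; the functions and k = 4 come from the text.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Synthetic test functions and their analytic gradients (Figure 4).
def f1(x, y):                  # monotonically increasing from a center
    return 0.1 * x**2 + 0.5 * y**2

def grad_f1(x, y):
    return np.stack([0.2 * x, 1.0 * y], axis=-1)

def f2(x, y):                  # periodically varying
    return np.sin(x) + np.cos(y)

def grad_f2(x, y):
    return np.stack([np.cos(x), -np.sin(y)], axis=-1)

# Sample 200 points on a 2D domain.
pts = rng.uniform(-5.0, 5.0, size=(200, 2))
vals = f1(pts[:, 0], pts[:, 1])
grads = grad_f1(pts[:, 0], pts[:, 1])

# k-NN graph (k = 4); entry 0 of each neighbor list is the point itself.
_, idx = NearestNeighbors(n_neighbors=5).fit(pts).kneighbors(pts)
edges = [(i, j) for i in range(len(pts)) for j in idx[i, 1:]]

# Ground-truth directional derivative on edge (i, j): project grad f at
# node i onto the unit edge direction e_ij.
targets = np.array([grads[i] @ ((pts[j] - pts[i]) / np.linalg.norm(pts[j] - pts[i]))
                    for i, j in edges])

# FinGrad baseline: (f_j - f_i) / ||x_j - x_i|| on the same edges.
fingrad = np.array([(vals[j] - vals[i]) / np.linalg.norm(pts[j] - pts[i])
                    for i, j in edges])
```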
Then, we construct a graph on the sampled points by using the k-NN algorithm (k = 4). With the known gradient ∇f = (∂f/∂x, ∂f/∂y) at each point (a node in the graph), we can compute directional derivatives by projecting ∇f onto a connected edge e_ij (see Figure 3). We compare against four baselines: the finite gradient (FinGrad), a Multilayer Perceptron (MLP), Graph Networks (GN), and a different form of Eq 1 (One-w). For the finite gradient ((f_j − f_i)/||x_j − x_i||), there is no learnable parameter and it only uses two points. For the MLP, we feed (f_i, f_j, x_i, x_j) as input to see whether learnable parameters can benefit the approximation. For GN, we use the distances between connected points as edge features and the function values on the points as node features. The edge feature output of GN is used as a prediction for the directional derivative on the edge. Finally, we modify the proposed form as (∇_w f)_ij = w_ij · f_j − f_i. GN and the modified form are used to verify the effectiveness of Eq 1. Note that we define two synthetic functions (Figure 4) with different properties: one monotonically increasing from a center and one periodically varying. Approximation accuracy As shown in Table 1, the proposed spatial difference layer outperforms the others by a large margin. As expected, FinGrad yields the largest error since it only considers two points without learnable parameters. We find that learnable parameters can significantly benefit the approximation of directional derivatives even if the input is the same (FinGrad vs. MLP). Note that utilizing neighboring information is generally helpful for learning spatial variations properly. However, simply training the parameters in GN is not sufficient, and explicitly defining differences, which is important for understanding spatial variations, provides a more robust inductive bias. One important finding is that One-w is not as effective as GN and can even be worse than FinGrad, because of its limited degrees of freedom. As implied by the form (∇_w f)_ij = w_ij · f_j − f_i, only a single w_ij adjusts the relative difference between f_i and f_j, which is not enough to represent all possible linear combinations of f_i and f_j. The unstable performance supports that the form of SDL is not ad hoc but effectively designed. We evaluate PA-DGN on synthetic data sampled from the simulation of specific convection-diffusion equations, to examine whether the proposed model can predict the next signals of the simulated dynamics from observations on discrete nodes only. For the simulated dynamics, we use an equation similar to the one in prior work, where the index i denotes the i-th node (the exact form is given in Appendix A.1). Then, we uniformly sample 250 points in the above 2D space. The task is to predict the signal values of all points for the future M steps given the observed values of the first N steps. For our experiments, we choose N = 5 and M = 15. Since there is no a priori graph structure on the sampled points, we construct a graph with the k-NN algorithm (k = 4) using the Euclidean distance. Figure 5 shows the dynamics and the graph structure of the sampled points. To evaluate the effect of the proposed SDL on the above prediction task, we cascade SDL and a linear regression model as our prediction model, since the dynamics follow a linear partial differential equation.
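The cascaded predictor just mentioned (SDL followed by a linear readout) might look like the following sketch, reusing the SpatialDifferenceLayer from the earlier snippet. The mean-aggregation of per-edge gradients onto nodes and the exact input features are our assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn

class SDLRegressor(nn.Module):
    """SDL + linear regression for next-step graph signal prediction."""
    def __init__(self, node_dim=1, hidden=64):
        super().__init__()
        self.sdl = SpatialDifferenceLayer(node_dim, hidden)
        # Input: current signal, aggregated gradient, Laplacian -> next signal.
        self.readout = nn.Linear(3 * node_dim, node_dim)

    def forward(self, f, edge_index):
        grad, lap = self.sdl(f, edge_index)
        # Mean-aggregate per-edge gradients onto their source nodes.
        src = edge_index[0]
        g_node = torch.zeros_like(f).index_add_(0, src, grad)
        deg = torch.zeros(f.size(0), 1).index_add_(
            0, src, torch.ones(src.size(0), 1)).clamp(min=1)
        return self.readout(torch.cat([f, g_node / deg, lap], dim=-1))
```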
We compare its performance with four baselines: Vector Auto-Regressor (VAR); Multi-Layer Perceptron (MLP); StandardOP: the standard approximation of differential operators in Section 2.1 followed by a linear regressor; MeshOP: similar to StandardOP but use the discretization on triangulated mesh in Section 2.2 for differential operators. Prediction Performance Table 2 shows the prediction performance of different models measured with mean absolute error. The prediction model with our proposed spatial differential layer outperforms other baselines. All models incorporating any form of spatial differential operators (StandardOP, MeshOP and SDL) outperform those without spatial differential operators (VAR and MLP), showing that introducing spatial differences information inspired by the intrinsic dynamics helps prediction. However, in cases where points with observable signal are sparse in the space, spatial differential operators approximated with fixed rules can be inaccurate and sub-optimal for prediction since the locally linear assumption which they are based on no longer holds. Our proposed spatial differential layer, to the contrary, is capable of bridging the gap between approximated difference operators and accurate ones by introducing learnable coefficients utilizing neighboring information, and thus improves the prediction performance of the model. We evaluate the proposed model on the task of predicting climate observations (Temperature) from the land-based weather stations located in the United States. Data and task We sample the weather stations located in the United States from the Online Climate Data Directory of the National Oceanic and Atmospheric Administration (NOAA) and choose the stations which have actively measured meteorological observations during 2015. We choose two geographically close but meteorologically diverse groups of stations: the Western and Southeastern states. We use k-Nearest Neighbor (NN) algorithm (k = 4) to generate graph structures and the final adjacency matrix is A = (A k + A k)/2 to make it symmetric where A k is the output adjacency matrix from k-NN algorithm. Our main task is to predict the next graph signals based on the current and past graph signals. All methods we evaluate are trained through the objective (Eq 3) with the Adam optimizer and we use scheduled sampling for the models with recurrent modules. We evaluate PA-DGN and other baselines on two prediction tasks, 1-step and multistep-ahead predictions. Furthermore, we demonstrate the ablation study that provides how much the spatial derivatives are important signals to predict the graph dynamics. We compare against the widely used baselines (VAR, MLP, and GRU) for 1-step and multistep prediction. Then, we use Recurrent Graph Neural Networks (RGN) to examine how much the graph structure is beneficial. Finally, we evaluate PA-DGN to verify if the proposed architecture (Eq 1) is able to improve the prediction quality. Experiment for the prediction task are summarized in Table 3. Overall, RGN and PA-DGN are better than other baselines and it implies that the graph structure provides useful inductive bias for the task. It is intuitive as the meteorological observations are continuously changing over the space and time and thus, the observations of the closer stations from the i-th station are strongly related to observations at the i-th station. 
PA-DGN outperforms RGN and the discrepancy comes from the fact that the spatial derivatives (Eq 1) we feed in PA-DGN are beneficial and this finding is expected because the meteorological signals at a certain point are a function of not only its previous signal but also the relative differences between neighbor signals and itself. Knowing the relative differences among local observations is particularly essential to understand physics-related dynamics. For example, Diffusion equation, which describes how physical quantities (e.g., heat) are transported through space over time, is also a function of relative differences of the quantities (df dt = D∆f) rather than values of the neighbor signals. In other words, spatial differences are physics-aware features and it is desired to leverage the features as input to learn dynamics related to physical phenomena. We further investigate if the modulated spatial derivatives (Eq 1) are effectively advantageous compared to the spatial derivatives defined in Riemannian manifolds. First, RGN without any spatial derivatives is assessed for the prediction tasks on Western and Southeastern states graph signals. Note that this model does not use any extra features but the graph signal, f (t). Secondly, we add StandardOP, the discrete spatial differences (Gradient and Laplacian) in Section 2.1 and MeshOP, the triangular mesh approximation of differential operators in Section 2.2 separately as additional signals to RGN. Finally, we incorporate with RGN our proposed Spatial Difference Layer. Table 3 shows the contribution of each component. As expected, PA-DGN provides much higher drops in MAE (3.56%,5.50%,8.51% and 8.73%,8.32%,5.49% on two datasets, respectively) compared to RGN without derivatives and the demonstrate that the derivatives, namely, relative differences from neighbor signals are effectively useful. However, neither RGN with StandardOP nor with MeshOP can consistently outperform RGN. We also found that PA-DGN consistently shows positive effects on the prediction error compared to the fixed derivatives. This finding is a piece of evidence to support that the parameters modulating spatial derivatives in our proposed Spacial Difference Layer are properly inferred to optimize the networks. In this paper, we introduce a novel architecture (PA-DGN) that approximates spatial derivatives to use them to represent PDEs which have a prominent role for physics-aware modeling. PA-DGN effectively learns the modulated derivatives for predictions and the derivatives can be used to discover hidden physics describing interactions between temporal and spatial derivatives. A.1 SIMULATED DATA For the simulated dynamics, we discretize the following partial differential equation similar to the one in to simulate the corresponding linear variable-coefficient convection-diffusion equation on graphs. In a continuous space, we define the linear variable-coefficient convection-diffusion equation as:, with We follow the setting of initialization in:, where N = 9, λ k,l, γ k,l ∼ N 0, 1 50, and k and l are chosen randomly. We use spatial difference operators to approximate spatial derivatives:, where s is the spatial grid size for discretization. Then we rewrite with difference operators defined on graphs:, where Then we replace the gradient w.r.t time in with temporal discretization:, where ∆t is the time step in temporal discretization. Equation is used for simulating the dynamics described by the equation. 
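The displayed equations in this appendix did not survive extraction, but the general scheme they describe is recoverable: central differences in space and a forward-Euler step in time. The sketch below illustrates that scheme for a generic linear variable-coefficient convection-diffusion equation; the coefficient fields a, b, c and the periodic boundaries implied by np.roll are placeholders, not the paper's exact setup.

```python
import numpy as np

def simulate_step(u, a, b, c, s, dt):
    """One forward-Euler step of du/dt = a*u_x + b*u_y + c*lap(u),
    with central differences on a grid of spacing s (periodic boundaries)."""
    u_x = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / (2 * s)
    u_y = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / (2 * s)
    lap = (np.roll(u, -1, 0) + np.roll(u, 1, 0) +
           np.roll(u, -1, 1) + np.roll(u, 1, 1) - 4 * u) / s**2
    return u + dt * (a * u_x + b * u_y + c * lap)

# 50 x 50 grid with dt = 0.01, matching the discretization described above;
# the coefficient values are illustrative.
u = np.random.randn(50, 50)
for _ in range(100):
    u = simulate_step(u, a=1.0, b=1.0, c=0.2, s=1.0, dt=0.01)
```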
Then, we uniformly sample 250 points in the above 2D space and choose their corresponding time series of u as the dataset used in our synthetic experiments. We generate 1000 sessions on a 50 × 50 regular mesh with time step size ∆t = 0.01. 700 sessions are used for training, 150 for validation and 150 for test. Here we provide additional details for models we used in this work, including model architecture settings and hyper-parameter settings. Unless mentioned otherwise, all models use a hidden dimension of size 64. • VAR: A vector autoregression model with 2 lags. Input is the concatenated features of previous 2 frames. The weights are shared among all nodes in the graph. • MLP: A multilayer perceptron model with 2 hidden layers. Input is the concatenated features of previous 2 frames. The weights are shared among all nodes in the graph. • GRU: A Gated Recurrent Unit network with 2 hidden layers. Input is the concatenated features of previous 2 frames. The weights are shared among all nodes in the graph. • RGN: A recurrent graph neural network model with 2 GN blocks. Each GN block has an edge update block and a node update block, both of which use a 2-layer GRU cell as the update function. We set its hidden dimension to 73 so that it has the same number of learnable parameters as our proposed model PA-DGN. • RGN(StandardOP): Similar to RGN, but use the output of difference operators in Section 2.1 as extra input features. We set its hidden dimension to 73. • RGN(MeshOP): Similar to RGN(StandardOP), but the extra input features are calculated using opeartors in Section 2.2. We set its hidden dimension to 73. • PA-DGN: Our proposed model. The spatial derivative layer uses a message passing neural network (MPNN) with 2 GN blocks using 2-layer MLPs as update functions. The forward network part uses a recurrent graph neural network with 2 recurrent GN blocks using 2-layer GRU cells as update functions. The numbers of learnable parameters of all models are listed as follows: The number of evaluation runs We performed 3 times for every experiment in this paper to report the mean and standard deviations. Length of prediction For experiments on synthetic data, all models take first 5 frames as input and predict the following 15 frames. For experiments on NOAA datasets, all models take first 12 frames as input and predict the following 12 frames. Training hyper-parameters We use Adam optimizer with learning rate 1e-3, batch size 8, and weight decay of 5e-4. All experiments are trained for a maximum of 2000 epochs with early stopping. All experiments are trained using inverse sigmoid scheduled sampling with the coefficient k = 107. Environments All experiments are implemented with Python3.6 and PyTorch 1.1.0, and are run with NVIDIA GTX 1080 Ti GPUs. In this section, we evaluate the effect of 2 different graph structures on baselines and our models: k-NN: a graph constructed with k-NN algorithm (k = 4); TriMesh: a graph generated with Delaunay Triangulation. All graphs use the Euclidean distance. Table 5 and Table 6 show the effect of different graph structures on the synthetic dataset used in Section 3.2 and the real-world dataset in Section 4.2 separately. We find that for different models the effect of graph structures is not homogeneous. For RGN and PA-DGN, k-NN graph is more beneficial to the prediction performance than TriMesh graph, because these two models rely more on neighboring information and a k-NN graph incorporates it better than a Delaunay Triangulation graph. 
However, switching from TriMesh graph to k-NN graph is harmful to the prediction accuracy of RGN(MeshOP) since Delaunay Triangulation is a well-defined method for generating triangulated mesh in contrast to k-NN graphs. Given the various effect of graph structures on different models, our proposed PA-DGN under k-NN graphs always outperforms other baselines using any graph structure. Figure 7 provides the distribution of MAEs across the nodes of PA-DGN applied to the graph signal prediction task of the west coast region of the real-world dataset in Section 4.2. As shown in the figure, nodes with the highest prediction error for short-term prediction are gathered in the inner part where the observable nodes are sparse, while for long-term prediction nodes in the area with a limited number of observable points no longer have the largest MAE. This implies that PA-DGN can utilize neighboring information efficiently even under the limitation of sparsely observable points.
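The two graph constructions compared in this ablation can be reproduced as follows. The helper names are ours, and the symmetrization step reads the main text's A = (A_k + A_k)/2 as (A_k + A_k^T)/2, since averaging a matrix with itself would change nothing.

```python
import numpy as np
from scipy.spatial import Delaunay
from sklearn.neighbors import kneighbors_graph

def knn_graph(coords, k=4):
    """Symmetric k-NN adjacency: A = (A_k + A_k^T) / 2."""
    a_k = kneighbors_graph(coords, n_neighbors=k, mode="connectivity").toarray()
    return (a_k + a_k.T) / 2.0

def trimesh_edges(coords):
    """Undirected edge list of the Delaunay triangulation (TriMesh)."""
    tri = Delaunay(coords)
    edges = set()
    for a, b, c in tri.simplices:          # each simplex is a triangle
        edges.update({tuple(sorted(e)) for e in [(a, b), (b, c), (c, a)]})
    return np.array(sorted(edges))
```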
We propose physics-aware difference graph networks designed to effectively learn spatial differences to modeling sparsely-observed dynamics.
Solving long-horizon sequential decision making tasks in environments with sparse rewards is a longstanding problem in reinforcement learning (RL) research. Hierarchical Reinforcement Learning (HRL) has held the promise to enhance the capabilities of RL agents via operation on different levels of temporal abstraction. Despite the success of recent works in dealing with inherent nonstationarity and sample complexity, it remains difficult to generalize to unseen environments and to transfer different layers of the policy to other agents. In this paper, we propose a novel HRL architecture, Hierarchical Decompositional Reinforcement Learning (HiDe), which allows decomposition of the hierarchical layers into independent subtasks, yet allows for joint training of all layers in end-to-end manner. The main insight is to combine a control policy on a lower level with an image-based planning policy on a higher level. We evaluate our method on various complex continuous control tasks for navigation, demonstrating that generalization across environments and transfer of higher level policies can be achieved. See videos https://sites.google.com/view/hide-rl Reinforcement learning (RL) has been succesfully applied to sequential-decision making tasks, such as learning how to play video games in Atari , mastering the game of Go or continuous control in robotics (; ; . However, despite the success of RL agents in learning control policies for myopic tasks, such as reaching a nearby target, they lack the ability to effectively reason over extended horizons. In this paper, we consider continuous control tasks that require planning over long horizons in navigation environments with sparse rewards. The task becomes particularly challenging with sparse and delayed rewards since an agent needs to infer which actions caused the reward in a domain where most samples give no signal at all. Common techniques to mitigate the issue of sparse rewards include learning from demonstrations or using enhanced exploration strategies (; ;). Hierarchical Reinforcement Learning (HRL) has been proposed in part to solve such tasks. Typically, a sequential decision making task is split into several simpler subtasks of different temporal and functional abstraction levels . Although the hierarchies would ideally be learned in parallel, most methods resort to curriculum learning (; ; ;). Recent goal-conditioned hierarchical architectures have successfully trained policies jointly via off-policy learning (; ; . However, these methods often do not generalize to unseen environments as we show in Section 5.1. We argue that this is due to a lack of true separation of planning and low-level control across the hierarchy. In this paper, we consider two main problems, namely functional decomposition of HRL architectures in navigation-based domains and generalization of RL agents to unseen environments (figure 1). To address these issues, we propose a novel multi-level HRL architecture that enables both functional decomposition and temporal abstraction. We introduce a 3-level hierarchy that decouples the major roles in a complex navigation task, namely planning and low-level control. The benefit of a modular design is twofold. First, layers have access to only task-relevant information for a predefined task, which significantly improves the generalization ability of the overall policy. Hence, this enables policies learned on a single task to solve randomly configured environments. Second, Figure 1: Navigation environments. 
The red sphere indicates the goal an agent needs to reach, with the starting point at the opposite end of the maze. The agent is trained on environment a). To test generalization, we use the environments with b) reversed starting and goal positions, c) mirrored maze with reversed starting and goal positions and d) randomly generated mazes. the planning and control layers are modular and thus allow for composition of cross-agent architectures. We empirically show that the planning layer of the hierarchy can be transferred successfully to new agents. During training we provide global environment information only to the planning layer, whereas the full internal state of the agent is only accessible by the control layer. The actions of the top and middle layers are in the form of displacement in space. Similarly, the goals of the middle and lowest layers are relative to the current position. This prevents the policies from overfitting to the global position in an environment and hence encourages generalization to new environments. In our framework (see figure 2), the planner (i.e., the highest level policy π 2) learns to find a trajectory leading the agent to the goal. Specifically, we learn a value map of the environment by means of a value propagation network . To prevent the policy from issuing too ambitious subgoals, an attention network estimates the range of the lower level policy π 0 (i.e., the agent). This attention mask also ensures that the planning considers the agent performance. The action of π 2 is the position which maximizes the masked value map, which serves as goal input to the policy π 1. The middle layer implements an interface between the upper planner and lower control layer, which refines the coarser subgoals into shorter and reachable targets for the agent. The middle layer is crucial in functionally decoupling the abstract task of planning (π 2) from agent specific continuous control. The lowest layer learns a control policy π 0 to steer the agent to intermediate goals. While the policies are functionally decoupled, they are trained together and must learn to cooperate. In this work, we focus on solving long-horizon tasks with sparse rewards in complex continuous navigation domains. We first show in a maze environment that generalization causes challenges for state-of-the-art approaches. We then demonstrate that training with the same environment configuration (i.e., fixed start and goal positions) can generalize to randomly configured environments. Lastly, we show the benefits of functional decomposition via transfer of individual layers between different agents. In particular, we train our method with a simple 2DoF ball agent in a maze environment to learn the planning layer which is later used to steer a more complex agent. The indicate that the proposed decomposition of policy layers is effective and can generalize to unseen environments. In summary our main contributions include: • A novel multi-layer HRL architecture that allows functional decomposition and temporal abstraction for navigation tasks. • This architecture enables generalization beyond training conditions and environments. • Functional decomposition that allows transfer of individual layers across different agents. Learning hierarchical policies has seen lasting interest (; ; ; ; ;), but many approaches are limited to discrete domains or induce priors. More recent works (; ; ; ;) have demonstrated HRL architectures in continuous domains. 
introduce FeUdal Networks (FUN), which was inspired by feudal reinforcement learn- Figure 2: Our 3-layer HRL architecture. The planning layer π 2 receives a birds eye view of the environment and the agent's position s xy and sets an intermediate target position g 2. The interface layer π 2 splits this subgoal into reachable targets g 1. A goal-conditioned control policy π 0 learns the required motor skills to reach the target g 1 given the agent's joint information s joints. ing . In FUN, a hierarchic decomposition is achieved via a learned state representation in latent space. While being able to operate in continuous state space, the approach is limited to discrete action spaces. introduce hierarchical structure into KLdivergence regularized RL using latent variables and induces semantically meaningful representations. The separation of concerns between high-level and low-level policy is guided by information asymmetry theory. Transfer of ing structure can solve or speed up training of new tasks or different agents. present HIRO, an off-policy HRL method with two levels of hierarchy. The non-stationary signal of the upper policy is mitigated via off-policy corrections, while the lower control policy benefits from densely shaped rewards. propose an extension of HIRO, which we call HIRO-LR, by learning a representation space from environment images, replacing the state and subgoal space with neural representations. While HIRO-LR can generalize to a flipped environment, it needs to be retrained, as only the learned space representation generalizes. Contrarily, HiDe generalizes without retraining. introduce Hierarchical Actor-Critic (HAC), an approach that can jointly learn multiple policies in parallel. The policies are trained in sparse reward environments via different hindsight techniques. HAC, HIRO and HIRO-LR consist of a set of nested policies where the goal of a policy is provided by the top layer. In this setting the goal and state space of the lower policy is identical to the action space of the upper policy. This necessitates sharing of the state space across layers. Overcoming this limitation, we introduce a modular design to decouple the functionality of individual layers. This allows us to define different state, action and goal spaces for each layer. Global information about the environment is only available to the planning layer, while lower levels only receive information that is specific to the respective layer. Furthermore, HAC and HIRO have a state space that includes the agent's position and the goal position, while and our method both have access to global information in the form of a top-down view image. In model-based reinforcement learning much attention has been given to learning of a dynamics model of the environment and subsequent planning (; ;). propose a planning method that performs a graph search over the replay buffer. However, they require to spawn the agent at different locations in the environment and let it learn a distance function in order to build the search graph. Unlike model-based RL, we do not learn state transitions explicitly. Instead, we learn a spatial value map from collected rewards. Recently, differentiable planning modules that can be trained via model-free reinforcement learning have been proposed (; ; ;). establish a connection between convolutional neural networks and Value Iteration . 
They propose Value Iteration Networks (VIN), an approach where model-free RL policies are additionally conditioned on a fully differentiable planning module. MVProp extends this work by making it more parameter-efficient and generalizable. The planning layer in our approach is based on MVProp; however, contrary to prior work, we do not rely on a fixed neighborhood mask to sequentially provide actions in its vicinity in order to reach a goal. Instead, we propose to learn an attention mask which is used to generate intermediate goals for the underlying layers. Other work learns a map of indoor spaces and plans on it using a multi-scale VIN. In that setting, the policy is learned from expert actions using supervised learning, and the robot operates on a discrete set of high-level macro actions. Universal Planning Networks (UPN) learn how to plan an optimal action trajectory via a latent space representation. In contrast to our approach, the method relies on expert demonstrations, and transfer to harder tasks can only be achieved after retraining. We model a Markov Decision Process (MDP) augmented with a set of goals G. We define the MDP as a tuple M = {S, A, G, R, T, ρ_0, γ}, where S and A are sets of states and actions, respectively, R_t = r(s_t, a_t, g_t) is a reward function, γ ∈ [0, 1] is a discount factor, T = p(s_{t+1} | s_t, a_t) is the transition dynamics of the environment, and ρ_0 = p(s_1) is the initial state distribution, with s_t ∈ S and a_t ∈ A. Each episode is initialized with a goal g ∈ G, and an initial state is sampled from ρ_0. We aim to find a policy π: S × G → A which maximizes the expected return. We train our policies by using an actor-critic framework where the goal-augmented action-value function is defined as Q^π(s_t, a_t, g) = E[Σ_{i≥t} γ^{i−t} r(s_i, a_i, g) | s_t, a_t, g]. The Q-function (critic) and the policy π (actor) are approximated by neural networks with parameters θ_Q and θ_π. The objective for θ_Q minimizes the loss L(θ_Q) = E[(Q(s_t, a_t, g | θ_Q) − y_t)^2], where y_t = r_t + γ Q(s_{t+1}, π(s_{t+1}, g | θ_π), g | θ_Q). The policy parameters θ_π are trained to maximize the Q-value, max_{θ_π} E[Q(s_t, π(s_t, g | θ_π), g | θ_Q)]. To address the issue of sparse rewards, we utilize Hindsight Experience Replay (HER), a technique to improve sample efficiency when training in goal-conditioned environments. The insight is that the desired goals of transitions stored in the replay buffer can be relabeled as states that were achieved in hindsight. Such data augmentation allows learning from failed episodes, which may generalize enough to solve the intended goal. Prior work applies two hindsight techniques to address the challenges introduced by the non-stationary nature of hierarchical policies and environments with sparse rewards. In order to train a policy π_i, optimal behavior of the lower-level policy is simulated by hindsight action transitions. More specifically, the action a_i is replaced with a state s_{i−1} that is actually achieved by the lower-level policy π_{i−1}. Identically to HER, hindsight goal transitions replace the subgoal g_{i−1} with an achieved state s_{i−1}, which consequently assigns a reward to the lower-level policy π_{i−1} for achieving the virtual subgoal. Additionally, a third technique called subgoal testing is proposed. The incentive of subgoal testing is to help a higher-level policy understand the current capability of a lower-level policy and to learn Q-values for subgoal actions that are out of reach. We find both hindsight techniques effective and apply them to our model during training. Prior work proposes differentiable value iteration networks (VIN) for path planning and navigation problems.
Later work proposes value propagation networks (MVProp) with better sample efficiency and generalization behavior. MVProp creates reward and propagation maps covering the environment. The reward map highlights the goal location and the propagation map determines the propagation factor of values through a particular location. The reward map is an image r_{i,j} of the same size as the environment image I, where r_{i,j} = 0 if the pixel (i, j) overlaps with the goal position and −1 otherwise. The value map V is calculated by unrolling max-pooling operations in a neighborhood N for k steps.
Figure 3: Planner layer π_2(s_xy, G, I). Given the top-view environment image I and goal G on the map, the maximum value propagation network (MVProp) calculates a value map V. By using the agent's current position s_xy, we estimate an attention mask M restricting the global value map V to a local and reachable subgoal map V̂. The policy π_2 selects the coordinates with maximum value and assigns the lower policy π_1 a subgoal that is relative to the agent's current position.
The action (i.e., the target position) is selected to be the pixel (i, j) maximizing the value in a predefined 3x3 neighborhood N(i_0, j_0) of the agent's current position (i_0, j_0) (equation 5). Note that the window N(i_0, j_0) is determined by the discrete, pixel-wise actions. We introduce a novel hierarchical architecture, HiDe, allowing for an explicit functional decomposition across layers. Similar to HAC, our method achieves temporal abstractions via nested policies. Moreover, our architecture enables functional decomposition explicitly. This is achieved by nesting i) an abstract planning layer, followed ii) by a local planner to iii) guide a control component. Crucially, only the top layer receives global information and is responsible for planning a trajectory towards a goal. The lowest layer learns a control policy for agent locomotion. The middle layer converts the planning layer output into subgoals for the control layer. Achieving functional decoupling across layers crucially depends on reducing the state in each layer to the information that is relevant to its specific task. This design significantly improves generalization (see Section 5). The highest layer of a hierarchical architecture is expected to learn high-level actions over a longer horizon, which define a coarse trajectory in navigation-based tasks. In related work, the planning layer, learning an implicit value function, shares the same architecture as the lower layers. Since the task is learned for a specific environment, limits to generalization are inherent to this design choice. In contrast, we introduce a planning-specific layer consisting of several components to learn the map and to find a feasible path to the goal. The planning layer is illustrated in figure 3. We utilize a value propagation network (MVProp) to learn an explicit value map which projects the collected rewards onto the environment image. Given a top-down image of the environment, a convolutional network determines the per-pixel flow probability p_{i,j}. For example, the probability value of a pixel corresponding to a wall should be 0, and that of a free passage 1. The original MVProp formulation uses a predefined 3 × 3 neighborhood of the agent's current position and passes the location of the maximum value in this neighbourhood as the goal position to the agent (equation 5). We augment the MVProp network with an attention model which learns to define the neighborhood dynamically and adaptively.
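Before turning to the attention model, the max-pooling value propagation just outlined can be sketched as follows. This is a schematic variant, not the authors' exact formulation: for readability the reward map is rescaled from {0, −1} to a goal indicator in {1, 0}, so propagated values stay in [0, 1] and walls (flow probability near 0) block propagation; the function name and the unroll depth k are assumptions.

```python
import torch
import torch.nn.functional as F

def mvprop_value_map(goal_mask, flow_prob, k=50):
    """goal_mask: (H, W), 1.0 at the goal pixel and 0.0 elsewhere.
    flow_prob: (H, W) in [0, 1], output of a conv-net over the top-down
    image (walls ~ 0, free space slightly below 1 so values decay)."""
    v = goal_mask.clone().float()
    for _ in range(k):
        # Max over each pixel's 3x3 neighborhood, scaled by flow probability.
        pooled = F.max_pool2d(v[None, None], kernel_size=3,
                              stride=1, padding=1)[0, 0]
        v = torch.maximum(v, flow_prob * pooled)
    return v
```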
Given the value map V and the agent's current position s_xy, we estimate how far the agent can go, modeled by a 2D Gaussian. More specifically, we predict a full covariance matrix Σ, with the agent's global position s_xy as the mean. We then build a 2D mask M of the same size as the environment image I by using the Gaussian likelihood function, m_{i,j} = exp(−(1/2)(x_{i,j} − s_xy)^T Σ^{−1}(x_{i,j} − s_xy)), where x_{i,j} denotes the coordinates of pixel (i, j).
Figure 4: A visual comparison of (left) our dynamic attention window with (right) a fixed neighborhood. The green dot corresponds to the selected subgoal in this case. Notice how our window is shaped so that it avoids the wall and induces a further subgoal.
Intuitively, the mask defines the density for the agent's success rate. Our planner policy selects an action (i.e., a subgoal) that maximizes the masked value map, g_2 = (argmax_{(i,j)} v̂_{i,j}) − s_xy, where v̂_{i,j} = m_{i,j} · v_{i,j} corresponds to the value at pixel (i, j) of the masked value map V̂. The benefits of having an attention model are twofold. First, the planning layer considers the agent dynamics when assigning subgoals, which may lead to fine- or coarse-grained subgoals depending on the underlying agent's performance. Second, the Gaussian window allows us to define a dynamic set of actions for the planner policy π_2, which is essential to find a trajectory of subgoals on the map. While the action space includes all pixels of the value map V, it is limited by the Gaussian mask M to the subset of reachable pixels. Qualitatively, we find this leads to better obstacle avoidance behaviour, such as at the corners and walls shown in figure 4. Since our planner layer operates in a discrete action space (i.e., pixels), the resolution of the projected maze image defines the minimum amount of displacement for the agent, affecting maneuverability. This could be tackled by using a soft-argmax to select the subgoal pixel, allowing the planner to choose real-valued actions and providing invariance to image resolution. In our experiments we see no difference in terms of the final performance. However, since the former setting allows for the use of DQN instead of DDPG, we prefer the discrete action space for simplicity and faster convergence. The middle layer in our hierarchy interfaces the high-level planning with the low-level control by introducing an additional level of temporal abstraction. The planner's longer-term goals are further split into a number of shorter-term targets. Such a refinement policy provides the lower-level control layer with reachable targets, which in turn yields easier rewards and hence accelerated learning. The interface layer policy is the only layer that does not directly interact with the environment. More specifically, the policy π_1 only receives the subgoal g_2 from the upper layer π_2 and chooses an action (i.e., subgoal g_1) for the lower-level locomotion layer π_0. Note that all the goal, state and action spaces of the policy π_1 are in 2D space. Contrary to prior work, we use subgoals that are relative to the agent's position s_xy. This helps the model generalize and learn better. The lowest layer learns a goal-conditioned control policy. Due to our explicit functional decomposition, it is the only layer with access to the agent's internal state s_joints, including joint positions and velocities, whereas the higher layers only have access to the agent's position. In a navigation task, the agent has to learn locomotion to reach the goal position.
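Returning to the planning layer for a moment: the Gaussian attention mask and the masked arg-max selection above can be sketched as follows. The sketch assumes a nonnegative value map (as produced by the propagation snippet earlier) and an (x, y) = (column, row) pixel convention; both conventions and the function names are ours.

```python
import numpy as np

def gaussian_mask(shape, mu, sigma):
    """2D Gaussian likelihood mask centered at the agent position mu
    with full 2x2 covariance sigma, evaluated on the image grid."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    d = np.stack([xs - mu[0], ys - mu[1]], axis=-1)        # (h, w, 2)
    mahal = np.einsum("hwi,ij,hwj->hw", d, np.linalg.inv(sigma), d)
    return np.exp(-0.5 * mahal)

def select_subgoal(value_map, agent_xy, sigma):
    """Mask the value map with the attention window and return the
    arg-max pixel as a subgoal relative to the agent position."""
    m = gaussian_mask(value_map.shape, agent_xy, sigma)
    v_hat = m * value_map                                  # masked value map
    i, j = np.unravel_index(np.argmax(v_hat), v_hat.shape)
    return np.array([j, i]) - np.asarray(agent_xy)         # relative subgoal g2
```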
Similar to HAC, we use hindsight goal transition techniques so that the control policy receives rewards even in failure cases. All policies in our hierarchy are jointly trained. We use the DDPG algorithm with the goal-augmented actor-critic framework (equations 2-3) for the control and interface layers, and DQN for the planning layer (see section 4.1).

We evaluate our method on a series of simulated continuous control tasks in navigation-based environments. All environments are simulated in the MuJoCo physics engine. Experiment and implementation details are provided in Appendix B. First, in section 5.1, we compare to various baseline methods. In section 5.2, we move to a new maze with a more complex design in order to show our model's generalization capabilities. Section 5.3 demonstrates that our approach indeed leads to functional decomposition by composing new agents, i.e., combining the planning layer of one agent with the locomotion layer of another. Finally, in section 5.4 we provide an ablation study of our design choices.

We introduce the following task configurations: Maze Forward: the training environment in all experiments; the task is to reach a goal from a fixed, pre-determined start position. Maze Backward: the training maze layout with swapped start and goal positions. Maze Flipped: a mirrored version of the training environment. Maze Random: a set of randomly generated mazes with random start and goal positions. In our experiments, we always train in the Maze Forward environment. The reward signal during training is constantly -1, unless the agent reaches the given goal (except for HIRO and HIRO-LR, see section 5.1). We test the agents on the above tasks with fixed start and goal positions. For more details about the environments, we refer to Appendix A. We intend to answer the following two questions: 1) Can our method generalize to unseen test environments? 2) Is it possible to transfer the planning layer policies between agents?

We compare our method to state-of-the-art approaches including HIRO, HIRO-LR, HAC, and a modified version of HAC called RelHAC in a simple Maze Forward environment, as shown in figure 6. For a fair comparison, we made a number of improvements to the HAC and HIRO implementations. For HAC, we introduced target networks and used the hindsight experience replay technique with the future strategy. In our experiments we observed that oscillations around the goal kept HIRO agents from finishing the task, which we solved by doubling the distance threshold for success. HIRO-LR is the closest to our method, as it also receives a top-down view image of the environment. Note that both HIRO and HIRO-LR have access to a dense negative distance reward, which is an advantage over HAC and HiDe, which only receive a reward when finishing the task. We train a modified HAC model, dubbed RelHAC, to assess our planning layer. RelHAC has the same lowest and middle layers as HiDe, whereas the top layer has the same structure as the middle layer, therefore missing an effective planner. Preliminary experiments using fixed start and fixed goal positions during training for HAC, HIRO, and HIRO-LR yielded 0% success rates in all cases. Therefore, the baseline models are trained using fixed start and random goal positions, allowing them to receive a reward signal without having to reach the intended goal at the other end of the maze.
Contrarily, HiDe is trained with fixed start and fixed goal positions, whereas HiDe-R represents HiDe under the same conditions as the baseline methods. All models learned this task successfully, as shown in figure 5 and table 1 (Forward column). HIRO demonstrates slightly better convergence and final performance, which can be attributed to the fact that it is trained with dense rewards. RelHAC performs worse than HAC due to the pruned state space of each layer and the lack of an effective planner. HIRO-LR takes longer to converge because it has to learn a latent goal space representation. Table 1 summarizes the models' generalization abilities in the unseen Maze Backward and Maze Flipped environments (see figure 6). While HIRO, HIRO-LR, and HAC manage to solve the training environment (Maze Forward) with success rates between 99% and 82%, they suffer from overfitting to the training environment, indicated by the 0% success rates in the unseen test environments. Contrarily, our method is able to achieve 54% and 69% success rates in this generalization task. As expected, training our model with random goal positions (i.e., HiDe-R) yields a more robust model outperforming vanilla HiDe. In subsequent experiments, we only report the results for our method, as our experiments have shown that the baseline methods cannot solve the training task for more complex environments.

In this experiment, we train an ant and a ball agent (see Appendix A.1) in the Maze Forward task with a more complex environment layout (cf. figure 1), while we keep both the start and goal positions fixed. We then evaluate this model in 4 different tasks (see section 5). Table 2 reports success rates of both agents in this complex task. Our model successfully transfers its navigation skills to unseen environments. The performance in the Maze Backward and Maze Flipped tasks is similar to the results shown in section 5.1, despite the increased difficulty. Since the randomly generated mazes are typically easier, our model shows similar or better performance.

To demonstrate that the layers in our architecture indeed learn separate sub-tasks, we transfer individual layers across different agents. We first train an agent without our planning layer, i.e., with RelHAC. We then replace the top layer of this agent with the planning layer from the models trained in section 5.2. Additionally, we train a humanoid agent and show as a proof of concept that transfer to a very complex agent can be achieved. We carry out two sets of experiments. First, we transfer the ant model's planning layer to the simpler 2 DoF ball agent. As indicated in Table 3, the performance of the ball with the ant's planning layer matches the results in Table 2. The ball agent's success rate increases for the random (from 96% to 100%) and forward (96% to 97%) maze tasks, whereas it decreases slightly in the backward (from 100% to 90%) and flipped (from 99% to 88%) configurations.

Table 4: Success rates in the simple maze. HiDe-A is our method with absolute subgoals. HiDe-AR has absolute goals and samples random goals during training.

          Forward   Backward   Flipped
HiDe-A    0 ± 0     0 ± 0      0 ± 0
HiDe-AR   95 ± 1    52 ± 33    34 ± 45

Table 5: Success rates of achieving a goal in the complex maze environment. HiDe-A and HiDe-AR as in Table 4. HiDe-NI is our method without the interface layer.

          Forward   Backward   Flipped   Random
HiDe-A    0 ± 0     0 ± 0      0 ± 0     0 ± 0
HiDe-AR   0 ± 0     0 ± 0      0 ± 0     0 ± 0
HiDe-NI   10 ± 5    46 ± 16    0 ± 0     3 ± 4

Second, we attach the ball agent's planning layer to the more complex ant agent.
Our new compositional agent performs marginally better or worse in the Flipped, Random, and Backward tasks. Please note that this experiment is an example of a case where the environment is first learned with a fast and easy-to-train agent (i.e., the ball) and then utilized by a more complex agent. We hereby show that transfer of layers between agents is possible and therefore find our hypothesis to be valid. Moreover, an estimate indicates that training is roughly 3-4 times faster, since the complex agent does not have to learn the planning layer. To demonstrate our method's transfer capabilities, we train a humanoid agent (17 DoF) in an empty environment with shaped rewards. We then use the planning and interface layer from a ball agent and connect it as is with the locomotion layer of the trained humanoid.

To support the claim that our architectural design choices lead to better generalization and functional decomposition, we compare empirical results for different variants of our method with an ant agent. First, we compare the performance of relative and absolute positions for both experiment 1 and experiment 2. For this reason, we train HiDe-A and HiDe-AR, the corresponding variants of HiDe and HiDe-R that use absolute positions. Unlike for relative positions, the policy needs to learn all values within the range of the environment dimensions. Second, we compare HiDe against a variant of HiDe without the interface layer, called HiDe-NI. The results for experiment 1 are shown in Table 4. HiDe-A does not manage to solve the task at all, similar to HAC and HIRO without random goal sampling. HiDe-AR succeeds in solving the Forward task. However, it generalizes worse than both HiDe and HiDe-R in the Backward and Flipped tasks. Both HiDe-A and HiDe-AR fail to solve the complex maze of experiment 2, as shown in Table 5. These results indicate that 1) relative positions improve performance and are an important aspect of our method to achieve generalization to other environments, and 2) random goal position sampling can help agents, but may not be available depending on the environment. As seen in Table 5, the variant of HiDe without the interface layer (HiDe-NI) performs worse than HiDe (cf. Table 2) in all experiments. Thus, the interface layer is an important part of our architecture. We also run an ablation study for HiDe with a fixed window size. More specifically, we train and evaluate an ant agent on window sizes 3×3, 5×5, and 9×9. The results are included in Tables 12, 13, and 14. The learned attention window (HiDe) achieves better or comparable performance. In all cases, HiDe generalizes better in the Backward complex maze. Moreover, the learned attention eliminates the need for tuning the window size hyperparameter per agent and environment.

In this paper, we introduce a novel HRL architecture that can solve complex navigation tasks in 3D-based maze environments. The architecture consists of a planning layer which learns an explicit value map and is connected with a subgoal refinement layer and a low-level control layer. The framework can be trained end-to-end. While training with a fixed start and goal position, our method is able to generalize to previously unseen settings and environments. Furthermore, we demonstrate that transfer of planners between different agents can be achieved, enabling us to transfer a planner trained with a simplistic agent such as a ball to a more complex agent such as an ant or humanoid.
In future work, we want to consider the integration of a more general planner that is not restricted to navigation-based environments.

We build on the MuJoCo environments used in prior work. All environments use dt = 0.02. Each episode in experiment 1 is terminated after 500 steps, and after 800 steps in the rest of the experiments, or after the goal is reached. All rewards are sparse, i.e., 0 for reaching the goal and -1 otherwise. We consider the goal reached if |s − g|_max < 1. Since HIRO sets goals in the far distance to encourage the lower layer to move to the goal faster, it cannot stay exactly at the target position. Moreover, HIRO does not terminate the episode after the goal is reached. Thus, for HIRO, we consider a goal reached if |s − g|_2 < 2.5. Our Ant agent is equivalent to the one in prior work, i.e., the Ant from Rllab with a gear power of 16 instead of 150 and a frame skip of 10 instead of 5. Our Ball agent is the PointMass agent from the DM Control Suite. We changed the joints so that the ball rolls instead of slides. Furthermore, we resized the motor gear and the ball itself to match the maze size. All mazes are modelled by immovable blocks of size 4 × 4 × 4; earlier work uses blocks of size 8 × 8 × 8. The environment shapes are depicted in figure 1. For the randomly generated mazes, we sample each block to be empty with probability p = 0.8; start and goal positions are also sampled uniformly at random. Mazes where start and goal positions are adjacent or where the goal is not reachable are discarded. For the evaluation, we generated 500 such environments and reused them (one per episode) for all experiments.

Our PyTorch implementation will be available at the project website. For both HIRO and HAC we used the original authors' implementations. In HIRO, we set the goal success radius for evaluation as described above. We ran the hiro xy variant, which uses only position coordinates for subgoals instead of all joint positions, to have a fair comparison with our method. To improve the performance of HAC in experiment 1, we modified their Hindsight Experience Replay implementation so that it uses the FUTURE strategy. More importantly, we also added target networks to both the actor and the critic to improve performance. For evaluation, we trained 5 seeds each for 2.5M steps on the Forward environment with continuous evaluation (every 100 episodes for 100 episodes). After training, we selected the best checkpoint of each seed based on the continuous evaluation. Then, we tested the learned policies for 500 episodes and report the average success rate. Although the agent and goal positions are fixed, the initial joint positions and velocities are sampled from a uniform distribution, as is standard in OpenAI Gym environments. Therefore, the tables in the paper contain means and standard deviations across 5 seeds.

B.3 NETWORK STRUCTURE

B.3.1 PLANNING LAYER

Input images for the planning layer were binarized in the following way: each pixel corresponds to one block (0 if it is a wall, 1 if it is a corridor). In our planning layer, we process the input image of size 32×32 (20×20 for experiment 1) via two convolutional layers with 3 × 3 kernels. Both layers have only 1 input and output channel and are padded so that the output size is the same as the input size. We propagate values through the value map K = 35 times using a 3 × 3 max-pooling layer.
Finally, the value map and the agent position image (a black image with a dot at the agent's position) are processed by three convolutions with 32 output channels and 3 × 3 filter windows, interleaved with 2 × 2 max-pooling, ReLU activation functions, and zero padding. The final feature map is flattened and processed by two fully connected layers with 64 neurons each, producing three outputs: σ1, σ2, ρ with softplus, softplus, and tanh activation functions, respectively. The final covariance matrix is given by Σ = [[σ1², ρσ1σ2], [ρσ1σ2, σ2²]], so that the matrix is always symmetric and positive definite. For numerical reasons, we multiply by a binarized kernel mask instead of the actual Gaussian densities. We set values higher than the mean to 1 and the others to 0. In practice, we use this line:

kernel = t.where(kernel >= kernel.mean(dim=(-2, -1), keepdim=True), t.ones_like(kernel), t.zeros_like(kernel))

We use the same network architecture for the middle and lower layers as in prior work, i.e., three fully connected layers with ReLU activation functions. The locomotion layer's output is activated with tanh and then scaled to the action range.

• Discount γ = 0.98 for all agents.
• Adam optimizer. Learning rate 0.001 for all actors and critics.
• Soft updates using a moving average; τ = 0.05 for all controllers.
• Replay buffer size was designed to store 500 episodes, similar to prior work.
• We performed 40 actor and critic learning updates after each epoch on each layer, once the replay buffer contained at least 256 transitions.
• Batch size 1024.
• No gradient clipping.
• Rewards 0 and -1 without any normalization.
• Subgoal testing only for the middle layer.
• Maximum subgoal horizon H = 10 for all three-layer algorithms and H = 25 for the ablations without the interface layer. See pseudocode 1.
• Observations were not normalized either.
• 2 HER transitions per transition, using the FUTURE strategy.
• Exploration noise: 0.05, 0.01, and 0.1 for the planning, middle, and locomotion layers, respectively.

In this section, we present all results collected for this paper, including individual runs.
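A minimal PyTorch sketch of the covariance head described in B.3.1 above, assuming the standard correlation parametrization (symmetric by construction, and positive definite because softplus gives σ1, σ2 > 0 and tanh gives |ρ| < 1):

import torch
import torch.nn.functional as F

def build_covariance(sigma1, sigma2, rho):
    # Standard correlation form of a 2x2 covariance matrix; symmetric,
    # positive definite for sigma1, sigma2 > 0 and |rho| < 1.
    off_diag = rho * sigma1 * sigma2
    return torch.stack([
        torch.stack([sigma1 ** 2, off_diag]),
        torch.stack([off_diag, sigma2 ** 2]),
    ])

# Usage with the three raw network outputs a, b, c (0-dim tensors):
# cov = build_covariance(F.softplus(a), F.softplus(b), torch.tanh(c))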
Learning Functionally Decomposed Hierarchies for Continuous Navigation Tasks
960
scitldr
Sound correspondence patterns play a crucial role for linguistic reconstruction. Linguists use them to prove language relationship, to reconstruct proto-forms, and for classical phylogenetic reconstruction based on shared innovations. Cognate words which fail to conform with expected patterns can further point to various kinds of exceptions in sound change, such as analogy or assimilation of frequent words. Here we present an automatic method for the inference of sound correspondence patterns across multiple languages based on a network approach. The core idea is to represent all columns in aligned cognate sets as nodes in a network with edges representing the degree of compatibility between the nodes. The task of inferring all compatible correspondence sets can then be handled as the well-known minimum clique cover problem in graph theory, which essentially seeks to split the graph into the smallest number of cliques such that each node belongs to exactly one clique. The resulting partitions represent all correspondence patterns which can be inferred for a given dataset. By excluding those patterns which occur in only a few cognate sets, the core of regularly recurring sound correspondences can be inferred. Based on this idea, the paper presents a method for automatic correspondence pattern recognition, which is implemented as part of a Python library which supplements the paper. To illustrate the usefulness of the method, various tests are presented, and concrete examples of the output of the method are provided. In addition to the source code, the study is supplemented by a short interactive tutorial that illustrates how to use the new method and how to inspect its results.

One of the fundamental insights of early historical linguistic research was that - as a result of systemic changes in the sound system of languages - genetically related languages exhibit structural similarities in those parts of their lexicon which were commonly inherited from their ancestral languages. These similarities surface in form of correspondence relations between sounds from different languages in cognate words. English th [θ], for example, is usually reflected as d in German, as we can see from cognate pairs like English thou vs. German du, or English thorn and German Dorn. English t, on the other hand, is usually reflected as z [ts] in German, as we can see from pairs like English toe vs. German Zeh, or English tooth vs. German Zahn. The identification of these regular sound correspondences plays a crucial role in historical language comparison, serving not only as the basis for the proof of genetic relationship BID11 BID6 or the reconstruction of protoforms BID20, 72-85, Anttila 1972, but (indirectly) also for classical subgrouping based on shared innovations (which would not be possible without identified correspondence patterns). Given the increasing application of automatic methods in historical linguistics after the "quantitative turn" (Geisler and List 2013, 111) in the beginning of this millennium, scholars have repeatedly attempted either to directly infer regular sound correspondences across genetically related languages BID30 BID29 BID5 BID26 or to integrate the inference into workflows for automatic cognate detection BID17 BID33 BID34 BID40.
What is interesting in this context, however, is that almost all approaches dealing with regular sound correspondences, be it early formal - but classically grounded - accounts BID16 BID20 or computer-based methods BID29 BID28 BID34, only consider sound correspondences between pairs of languages. A rare exception can be found in the work of Anttila, who presents the search for regular sound correspondences across multiple languages as the basic technique underlying the comparative method for historical language comparison. Anttila's description starts from a set of cognate word forms (or morphemes) across the languages under investigation. These words are then arranged in such a way that corresponding sounds in all words are placed into the same column of a matrix. The extraction of regularly recurring sound correspondences in the languages under investigation is then based on the identification of similar patterns recurring across different columns within the cognate sets. The procedure is illustrated in Figure 1, where four cognate sets in Sanskrit, Ancient Greek, Latin, and Gothic are shown, two taken from Anttila and two added by me. Two points are remarkable about Anttila's approach. First, it builds heavily on the phonetic alignment of sound sequences, a concept that was only recently adapted in linguistics (Covington 1996, BID27, BID34), building heavily on approaches in bioinformatics and computer science BID55 BID45, although it was implicitly always an integral part of the methodology of historical language comparison (compare Fox 1995, 67f; Dixon and BID9). Second, it reflects a concrete technique by which regular sound correspondences for multiple languages can be detected and employed as a starting point for linguistic reconstruction. If we look at the framed columns in the four examples in Figure 1, which are further labeled alphabetically, for example, we can easily see that the patterns A, E, and F are remarkably similar, with the missing reflexes in Gothic in the patterns E and F as the only difference. The same holds, however, for columns C, E, and F. Since A and C differ regarding the reflex sound of Gothic (u vs. au), they cannot be assigned to the same correspondence set at this stage, and if we want to solve the problem of finding the regular sound correspondences for the words in the figure, we need to make a decision about which columns in the alignments we assign to the same correspondence sets, thereby 'imputing' missing sounds where we miss a reflex. Assuming that the "regular" pattern in our case is reflected by the group of A, E, and F, we can make predictions about the sounds missing in Gothic in E and F, concluding that, if ever we find the missing reflex in so far unrecognised sources of Gothic in the future, we would expect a -u- in the words for 'daughter-in-law' and 'red'. We can easily see how patterns of sound correspondences across multiple languages can serve as the basis for linguistic reconstruction. Strictly speaking, if two alignment columns are identical (ignoring missing data to some extent), they need to reflect the same proto-sound. But even if they are not identical, they could be assigned to the same proto-sound, provided that one can show that the differences are conditioned by phonetic context. This is the case for Gothic au [o] in pattern C, which has been shown to go back to u when preceding h (Meier-Brügger 2002, 210f). As a result, scholars usually reconstruct Proto-Indo-European *u for A, C, E, and F.
Figure 1 Regular sound correspondences across four Indo-European languages, illustrated with help of alignments along the lines of Anttila (1972: 246). In contrast to the original illustration, lost sounds are displayed with help of the dash "-" as a gap symbol, while missing words (where no reflex in Gothic or Latin could be found) are represented by the "Ø" symbol.

While it seems trivial to identify sound correspondences across multiple languages from the few examples provided in Figure 1, the problem can become quite complicated if we add more cognate sets and languages to the comparative sample. Especially the handling of missing reflexes for a given cognate set becomes a problem here, as missing data makes it difficult for linguists to decide which alignment columns to group with each other. This can already be seen from the examples given in Figure 1, where we have two possibilities to group the patterns A, C, E, and F. The goal of this paper is to illustrate how a manual analysis in the spirit of Anttila can be automated and fruitfully applied - not only in purely computational approaches to historical linguistics, but also in computer-assisted frameworks that help linguists to explore their data before they start carrying out painstaking qualitative comparisons BID37. In order to illustrate how this problem can be solved computationally, I will first discuss some important general aspects of sound correspondences and sound correspondence patterns in Section 2, introducing specific terminology that will be needed in the remainder. In Section 3, I will show that the problem of finding sound correspondences across multiple languages can be modeled as the well-known clique-cover problem in an undirected network BID3. While this problem is hard to solve in an exact way computationally, fast approximate solutions exist BID57 and can be easily applied. Based on these findings, I will introduce a fully automated method for the recognition of sound correspondence patterns across multiple languages in Section 4. This method is implemented in form of a Python library and can be readily applied to multilingual wordlist data in the form also required by software packages such as LingPy or software tools such as EDICTOR BID38. In Section 5, I will then illustrate how the method can be applied and evaluate its performance both qualitatively and quantitatively. The application of the new method is further explained in an accompanying interactive tutorial available from the supplementary material, which also shows how an extended version of the EDICTOR interface can be used to inspect the inferred correspondence patterns interactively. The supplementary material also provides code and data as well as instructions on how to replicate all tests carried out in this study.

In the introduction, I have tried to emphasize that the comparative method is less concerned with regular sound correspondences attested for language pairs than with correspondences across all languages under consideration. In the following, I want to substantiate this claim further, while at the same time introducing some major methodological considerations and ideas which are important for the development of the new method for sound correspondence pattern recognition that I want to introduce. Sound correspondences are most easily defined for pairs of languages.
The more languages we add to the sample, however, the more complex the picture will get, and while we can state three (basic) patterns for the case of English, German, and Dutch, given in our example, we may easily get more patterns, due to secondary sound changes in the different languages, although we would still reconstruct only three sounds in the proto-language ([θ, t, d]). Thus, there is a one-to-n relationship between what we interpret as a proto-sound of the proto-language and the regular correspondence patterns which we may find in our data. While we will reserve the term sound correspondence for pairwise language comparison, we will use the term sound correspondence pattern (or simply correspondence pattern) for the abstract notion of regular sound correspondences across a set of languages which we can find in the data. If the words upon which we base our inference of correspondence patterns are strictly cognate (i.e., they have not been borrowed and have not undergone "irregular" changes like analogy or assimilation of frequent words), a given correspondence pattern points directly to a proto-sound in the ancestral language. A given proto-sound, however, may be reflected in more than one correspondence pattern, which can ideally be resolved by inferring the phonetic context that conditions the change from the proto-language to individual descendants.

Scholars like Meillet have stated that the core of historical linguistics is not linguistic reconstruction, but the inference of correspondence patterns, emphasizing that 'reconstructions are nothing else but the signs by which one points to the correspondences in short form'. However, given the one-to-n relation between proto-sounds and correspondence patterns, it is clear that this is not quite correct. Having inferred regular correspondence patterns in our data, our reconstructions will add a different level of analysis by further clustering these patterns into groups which we believe to reflect one single sound in the ancestral language. That there is usually more than just one correspondence pattern for a reconstructed proto-sound is nothing new to most practitioners of linguistic reconstruction. Unfortunately, however, linguists rarely list all possible correspondence patterns exhaustively when presenting their reconstructions, but instead select the most frequent ones, leaving the explanation of weird or unexpected patterns to comments written in prose. A first and important step of making a linguistic reconstruction system transparent, however, should start from an exhaustive listing of all correspondence patterns, including irregular patterns which occur very infrequently but would still be accepted by the scholars as reflecting true cognate words. What scholars do instead is provide tables which summarise the correspondence patterns in a rough form, e.g., by showing the reflexes of a given proto-sound in the descendant languages in a table, where multiple reflexes for one and the same language are put in the same cell. An example, taken with modifications from Clackson, is given in Table 2. In this table, the major reflexes of Proto-Indo-European stops in 11 languages, representing the oldest attestations and major branches of Indo-European, are listed. This table is a very typical example of the way in which scholars discuss, propose, and present correspondence patterns in linguistic reconstruction BID4 BID21 BID24 BID2. The shortcomings of this representation become immediately transparent.
Neither are we told about the frequency with which a given reflex is attested in the descendant languages, nor are we told about the specific phonetic conditions which have been proposed to trigger the change where we have two reflexes for the same proto-sound. While scholars of Indo-European tend to know these conditions by heart, it is perfectly understandable why they would not list them. However, when the results are presented to outsiders to the field in this form, it is quite difficult for them to correctly evaluate the findings. A sound correspondence table may look impressive, but it is of no use to people who have not studied the data themselves.

Table 2 Sound correspondence patterns for Indo-European stops, following Clackson.

A further problem in the field of linguistic reconstruction is that scholars barely discuss workflows or procedures by which sound correspondence patterns can be inferred. For well-investigated language families like Indo-European or Austronesian, which have been thoroughly studied for hundreds of years, it is clear that there is no direct need to propose a heuristic procedure, given that the major patterns have been identified long ago and the research has reached a stage where scholarly discussions circle around individual etymologies or higher levels of linguistic reconstruction, like semantics, morphology, and syntax. For languages whose history is less well known, however, and where historical language reconstruction has not even reached a stage where a majority of scholars agrees, a procedure that helps to identify the major correspondence patterns underlying a given dataset would surely be incredibly valuable. In order to infer correspondence patterns, the data must be available in aligned form (for details on alignments, see List 2014, 61-118), that is, we must know which of the sound segments that we compare across cognate sets are assumed to go back to the same ancestral segment. This is illustrated in Figure 2, where the cognate sets from Table 1 are presented in aligned form, following the alignment annotations of LingPy and EDICTOR BID38, representing zero-matches with the dash ("-") as a gap symbol, and using brackets to indicate unalignable parts in the sequences. Scholars at times object to this claim, but it should be evident, also from reading the account by BID0 mentioned above, that without alignment analyses, albeit implicit ones that are never provided in concrete form, no correspondence patterns could be proposed. Even if alignments are never mentioned in the entire book of BID7, the correspondence patterns shown in Table 2 directly reflect them, since each example that one could give for the data underlying a given correspondence pattern in the descendant languages would require the identification of unique sounds in each of the reflexes that confirm this pattern.

Figure 2 Alignment analyses of the six cognate sets from Table 1. Brackets around subsequences indicate that the alignments cannot be fully resolved due to secondary morphological changes.

It is important to keep in mind that strict alignments can only be made of cognate words (or parts of cognate words) that are directly related.
The notion of directly related word (parts) is close to the notion of orthologs in evolutionary biology BID36 and refers to words or word parts whose development has not been influenced by secondary changes due to morphological processes. If we compare German gehen [geː.ən] 'to go' with English go [gəʊ], for example, it would be useless to align the verb ending -en in German with two gap characters in English, since we know well that English lost most of its verb endings independently. We can, however, align the initial sound and the main vowel. Following evolutionary biology, a given column of an alignment is called an alignment site (or simply a site). An alignment site may reflect the same values as we find in a correspondence pattern, and correspondence patterns are usually derived from alignment sites, but in contrast to a correspondence pattern, an alignment site may reflect a correspondence pattern only incompletely, due to missing data in one or more of the languages under investigation. For example, when comparing German Dorf [dɔrf] 'village' with Dutch dorp [dɔrp], it is immediately clear that the initial sounds of both words represent the same correspondence pattern as we find for the cognate sets for 'thick' and 'thorn' given in Figure 2, although no reflex of their Proto-Germanic ancestor form *þurpa- (originally meaning 'crowd', see Kroonen 2013, 553) has survived in Modern English. Thanks to the correspondence patterns in Table 1, however, we know that - if we project the word back to Proto-Germanic - we must reconstruct the initial with *þ- [θ], since the match of German d- and Dutch d- occurs - if we ignore recent borrowings - only in correspondence patterns in which English has th-. These "gaps" due to missing reflexes of a given cognate set are not the same as the gaps inside an alignment, since the latter are due to the (regular) loss or gain of a sound segment in a given alignment site, while gaps due to missing reflexes may either reflect processes of lexical replacement (List 2014, 37f) or a preliminary stage of research resulting from insufficient data collections or insufficient search for potential reflexes. While I follow the LingPy annotation for gaps in alignments by using the dash as a symbol for gaps in alignment sites, I will use the character Ø (denoting the empty set) to represent missing data in correspondence patterns and alignment sites. The relation between correspondence patterns in the sense developed here and alignment sites is illustrated in FIG1, where the initial alignment sites of three alignments corresponding to Proto-Germanic þ [θ] are assembled to form one correspondence pattern.

In this section, I have tried to introduce some basic terms, techniques, and concepts that help to set the scope for the new method for sound correspondence pattern recognition that will be presented in this paper. I first distinguished correspondence patterns from proto-forms, since one proto-form can represent multiple correspondence patterns in a given language family. I then distinguished correspondence patterns from concrete alignment sites in which the relations of concrete cognate words are displayed, by emphasizing that correspondence patterns can be seen as a more abstract analysis, in which similar alignment sites across different cognate sets, regardless of missing reflexes in the descendant languages, are assigned to the same correspondence pattern.
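To make the distinction concrete, here is a minimal Python sketch of the 'thorp' example (language order German, English, Dutch; Ø marks the missing English reflex); the data structures are illustrative, not the library's internal representation:

# Correspondence pattern for Proto-Germanic *þ (German, English, Dutch).
pattern = ["d", "θ", "d"]
# Initial alignment site of the 'thorp' cognate set: no English reflex.
site = ["d", "Ø", "d"]
# The site matches the pattern wherever it has concrete reflexes...
assert all(s in ("Ø", p) for s, p in zip(site, pattern))
# ...so the pattern predicts the missing English reflex: θ (i.e., th-).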
In the next sections, I will try to show that this handling allows us to model the problem of sound correspondence pattern recognition as a network partitioning task.

Before presenting the new method for automatic correspondence pattern recognition, it is important to introduce some basic thoughts about alignment sites and correspondence patterns that hopefully help to elucidate the core idea behind the method. Having established the notion of alignment site compatibility, I will show how alignment sites can be modelled with help of an alignment site network, from which we can extract regularly recurring sound correspondences. If we recall the problem we had in grouping the alignment sites E and F from Figure 1 with either A or C, we can see that the general problem of grouping alignment sites to correspondence patterns is their compatibility. If we had reflexes for all languages under investigation in all cognate sets, compatibility would not be a problem, since we could simply group all identical sites with each other, and the task could be considered solved. However, since it is rather the exception than the norm to have reflexes for all languages under consideration in a number of cognate sets, we will always find alternative possibilities to group our alignment sites in correspondence patterns. In the following, I will assume that two alignment sites are compatible if they (a) share at least one sound which is not a gap symbol, and (b) do not have any conflicting sounds. We can further weight the compatibility by counting how many sounds are shared among two alignment sites. This is illustrated in FIG2 for our four alignment sites A, C, E, and F from Figure 1 above. As we can see from the figure, only two sites are incompatible, namely A and C, as they show different sounds for the reflexes in Gothic. Given that the reflex for Latin is missing in site C, we can further see that C shares only two sounds with E and F.

Having established the concept of alignment site compatibility in the previous section, it is straightforward to go a step further and model alignment sites in form of a network. Here, all sites in the data represent nodes (or vertices), and edges are only drawn between those nodes which are compatible, following the criterion of compatibility outlined in the previous section. We can further weight the edges in the alignment site network, for example, by using the number of matching sounds (where no missing data is encountered) to represent the strength of the connection (but we will disregard weighting in our method). FIG3 illustrates how an alignment site network can be created from the compatibility comparison shown in FIG2. As was mentioned already in the introduction, the main problem of assigning different alignment sites to correspondence patterns is to decide about those cases where one site could be assigned to more than one pattern. Having shown how the data can be modeled in form of a network, we can rephrase the task of identifying correspondence patterns as a network partitioning task with the goal of splitting the network into non-overlapping sets of nodes. Given that our main criterion for a valid correspondence pattern is full compatibility among all alignment sites of a given partition, we can further specify the task as a clique partitioning task. A clique in a network is 'a maximal subset of the vertices [nodes] in an undirected network such that every member of the set is connected by an edge to every other' (Newman 2010, 193).
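The compatibility criterion and the alignment site network can be stated compactly in code. The following minimal Python sketch is illustrative only (sites are lists of sounds, one slot per language; Ø marks missing reflexes; the gap symbol "-" may match but does not count as shared evidence):

import itertools
import networkx as nx

def compatible(site_a, site_b, missing="Ø", gap="-"):
    # Two sites are compatible if they share at least one concrete sound
    # and never conflict in any language.
    shared = 0
    for a, b in zip(site_a, site_b):
        if a == missing or b == missing:
            continue              # missing reflexes impose no constraint
        if a != b:
            return False          # conflicting reflexes in one language
        if a != gap:
            shared += 1           # only real sounds count as evidence
    return shared > 0

def build_site_network(sites):
    # One node per alignment site, an edge between every compatible pair.
    graph = nx.Graph()
    graph.add_nodes_from(range(len(sites)))
    for i, j in itertools.combinations(range(len(sites)), 2):
        if compatible(sites[i], sites[j]):
            graph.add_edge(i, j)
    return graph

# Sites A, C, E from Figure 1 (Sanskrit, Greek, Latin, Gothic):
A, C, E = ["u", "u", "u", "u"], ["u", "u", "Ø", "au"], ["u", "u", "u", "Ø"]
assert not compatible(A, C)       # Gothic u vs. au
assert compatible(A, E) and compatible(C, E)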
Demanding that sound correspondence patterns should form a clique of compatible nodes in the network of alignment sites directly reflects the basic practice of historical language comparison as outlined by BID0, according to which a further grouping of incompatible alignment sites by proposing a proto-form would require us to identify a phonetic environment that could show incompatible sites to be complementary. Partitioning our alignment site network into cliques therefore does not solve the problem of linguistic reconstruction, but it can be seen as its fundamental prerequisite. It is difficult to find a linguistically valid criterion for the way in which the alignment site network should be partitioned into cliques of compatible nodes. Following a general reasoning along the lines of Occam's razor or general parsimony of explanation (Gauch 2003, 269-326), which is often invoked as a criterion for favoring one explanation over the other in historical language comparison, it is straightforward to state the problem of clique partitioning of alignment site networks as a minimum clique cover problem, i.e., the problem of identifying 'the minimum number of cliques into which a graph can be partitioned' (Bhasker and Samad 1991, 2). This means that, when partitioning our alignment site graph, we should try to minimize the number of cliques to which the different nodes are assigned. The minimum clique cover problem is a well-known problem in graph theory and computer science, although it is usually more prominently discussed in form of its inverse problem, the graph coloring problem, which tries to assign different colors to all nodes in a graph which are directly connected (Hetland 2010, 276). While the problem is generally known to be NP-hard (ibid.), fast approximate solutions like the Welsh-Powell algorithm BID57 are available. Using approximate solutions seems to be appropriate for the task of correspondence pattern recognition, given that we do not (yet) have formal linguistic criteria to favor one clique cover over another. We should furthermore bear in mind that an optimal resolution of sound correspondence patterns for linguistic purposes would additionally allow for uncertainty when it comes to assigning a given alignment site to a given sound correspondence pattern. If we decided, for example, that the pattern C in FIG3 could by no means cluster with E and F, this may well be premature before we have figured out whether the two patterns (u-u-u-u vs. u-u-u-au) are complementary and what phonetic environments explain their complementarity. The algorithm for correspondence pattern recognition, which will be presented in the next section, accounts for this by allowing one to propose fuzzy partitions in which alignment sites can be assigned to more than one correspondence pattern.

In the following, I will introduce a method for automatic correspondence pattern recognition that takes cognate-coded and phonetically aligned multilingual wordlists as input and delivers a list of correspondence patterns as output, with each alignment site in the original data being assigned to at least one of the inferred correspondence patterns. The general workflow underlying the method for automatic correspondence pattern recognition can be divided into five different stages.
Starting from a multilingual wordlist in which translations for a concept list are provided in form of phonetic transcriptions for the languages under investigation, the words in the same semantic slot are manually or automatically searched for cognates (A) and (again manually or automatically) phonetically aligned (B). The alignment sites are then used to construct an alignment site network in which edges are drawn between compatible sites (C). The alignment sites are then partitioned into distinct non-overlapping subsets using an approximate algorithm for the minimum clique cover problem (D). In a final step, potential correspondence patterns are extracted from the non-overlapping subsets, and all individual alignment sites are assigned to those patterns with which they are compatible (E). While there are both standard algorithms and annotation frameworks for stages (A) and (B), the major contribution of this paper is to provide the algorithms for stages (C), (D), and (E). The workflow is further illustrated in Figure 6.

Figure 6 General workflow of the method for automatic correspondence pattern recognition. Steps (A) and (B) may additionally be provided in manually corrected form in the input data.

In the following sections, I will provide more detailed explanations on the different stages. The method has been implemented as a Python package that can be used as a plugin for the LingPy library for quantitative tasks in historical linguistics. Users can either invoke the method from within Python scripts as part of their customised workflows, or from the command line. The supplementary material offers a short tutorial along with example data illustrating how the package can be used.

The input format for the method described here generally follows the input format employed by LingPy. In general, this format is a tab-separated text file with the first row being reserved for the header and the first column being reserved for a unique numerical identifier. The header specifies the entry types in the data. In LingPy, all analyses require certain entry types to be provided from the file, but the entry types can vary from method to method. Table 3 provides an example of the minimal data that needs to be provided to our method for automatic correspondence pattern recognition. In addition to the generally needed information on the identifier of each word (ID), the language (DOCULECT), the concept or elicitation gloss (CONCEPT), the (not necessarily required) orthographic form (FORM), and the phonetic transcription provided in space-segmented form (TOKENS), the method requires information on the type of sound (consonant or vowel, STRUCTURE), the cognate set (COGID), and the alignment (ALIGNMENT). The format employed by LingPy and the method presented in this study is very similar to the format specifications developed by the Cross-Linguistic Data Formats (CLDF) initiative, which seeks to render cross-linguistic data more comparable. The CLDF homepage (http://cldf.clld.org) offers more detailed information on the ideas behind the different columns mentioned above as part of the CLDF ontology. LingPy offers routines to convert to and from the format specifications of the CLDF initiative. The method offers different output formats, ranging from the LingPy wordlist format, in which additional columns added to the original wordlist provide information on the inferred patterns, to tab-separated text files in which the patterns are explicitly listed.
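For illustration, such a tab-separated wordlist (columns as in Table 3 below) can be parsed with nothing more than the standard library; the following sketch is a simplification, not the LingPy plugin itself (in particular, unalignable parts in brackets are not treated specially):

import csv
from collections import defaultdict

def load_alignment_sites(path, missing="Ø"):
    # Group aligned words by cognate set and turn every column of each
    # cognate set's alignment into one site (one sound slot per language).
    cognates = defaultdict(dict)   # cogid -> {doculect: aligned tokens}
    languages = []
    with open(path, encoding="utf-8") as handle:
        for row in csv.DictReader(handle, delimiter="\t"):
            cognates[row["COGID"]][row["DOCULECT"]] = row["ALIGNMENT"].split()
            if row["DOCULECT"] not in languages:
                languages.append(row["DOCULECT"])
    sites = []
    for words in cognates.values():
        for col in range(max(len(w) for w in words.values())):
            sites.append([words[d][col] if d in words and col < len(words[d])
                          else missing for d in languages])
    return languages, sites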
The wordlist output can also be directly inspected in the EDICTOR tool, allowing for a convenient manual inspection of the inferred patterns.

Table 3 Input format with the basic values needed to apply the method for automatic correspondence pattern recognition. Both the information in the column COGID (providing information on the cognacy) and the ALIGNMENT column (providing the segmented transcriptions in aligned form) can be automatically computed.

ID  DOCULECT  CONCEPT  FORM    TOKENS    STRUCTURE  COGID  ALIGNMENT
1   German    tongue   Zunge   ts ʊ ŋ ə  c v c      1      ts ʊ ŋ (ə)
2   English   tongue   tongue  t ʌ ŋ     c v c      1      t ʌ ŋ (-)
3   Dutch     tongue   tong    t ɔ ŋ     c v c      1      t ɔ ŋ (-)
4   German    tooth    Zahn    ts aː n   c v c      2      ts aː n -
5   English   tooth    tooth   t uː θ    c v c      2      t uː - θ
6   Dutch     tooth    tand    t ɑ n t   c v c      2      t ɑ n t
7   German    thick    dick    d ɪ k     c v c      3      d ɪ k

Given that the method is implemented in form of a plugin for the LingPy library, all cognate detection and phonetic alignment methods offered in LingPy are also available for the approach and have been tested. Among automatic cognate detection methods, users can select among the consonant-class matching approach BID54, simple cognate partitioning with help of the normalized edit distance BID32 or the Sound-Class-Based Alignment (SCA) method BID33, and enhanced cognate detection with help of the original LexStat method BID33 and its enhanced version, based on the Infomap network partitioning algorithm BID49, as proposed in BID40. In addition, when dealing with data which has been previously segmented morphologically, users can also employ LingPy's partial cognate detection method BID41. For phonetic alignments, LingPy offers two basic variants as part of the SCA method for multiple sequence alignments BID33, namely "classical" progressive alignment and library-based alignment, inspired by the T-COFFEE algorithm for multiple sequence alignment in bioinformatics (Notredame, Higgins, and Heringa 2000). The automatic methods for cognate detection and phonetic alignment, however, are not necessarily needed in order to apply the automatic method for correspondence pattern recognition. Alternatively, users can prepare their data with help of the EDICTOR tool for creating, maintaining, and publishing etymological data BID38, which allows users to annotate cognates and alignments from scratch or to refine cognate sets and alignments that have been derived from automatic approaches. Users proficient in computing do not need to rely on the algorithms offered by LingPy. Given that the number of freely available algorithms for automatic cognate detection is steadily increasing BID25 BID1 BID48, users can design their personal workflows, as long as they manage to export the analyses into the input formats required by the new method for correspondence pattern recognition.

The method for correspondence pattern recognition consists of three stages (C-E in our general workflow). It starts with the reconstruction of an alignment site network in which each node represents a unique alignment site, and links between alignment sites are drawn if the sites are compatible, following the criterion for site compatibility outlined in Section 3.1 (C). It then uses a greedy algorithm to compute an approximate minimal clique cover of the network (D). All partitions proposed in stage (D) qualify as potentially valid correspondence patterns of our data. But the individual alignment sites in a given dataset may as well be compatible with more than one correspondence pattern.
For this reason, the method iterates again over all alignment sites in the data and checks with which of the correspondence patterns inferred in stage (D) they are compatible. This procedure yields a (potentially) fuzzy assignment of each alignment site to at least one, but potentially more, sound correspondence patterns (E). By further weighting and sorting the fuzzy patterns to which a given site has been assigned, the number of fuzzy alignment sites can be further reduced.

As mentioned above in Section 3.3, by modeling the alignment sites in the data as a network in which edges are drawn between compatible alignment sites, we can treat the problem of correspondence pattern recognition as a network partitioning task, or, more precisely, as a specific case of the clique cover problem. Given the experimental status of this research, where it is still not fully understood what qualifies as an optimal clique cover of an alignment site graph with respect to the problem of identifying regular sound correspondence patterns in historical linguistics, I decided to use a simple approximate solution for the clique cover problem. The advantage of this approach is that it is reasonably fast and can be easily applied to larger datasets. Once more data for training and testing becomes available, the basic framework introduced here can be easily extended by adding more sophisticated methods. The clique cover algorithm consists of two steps. In a first step, the data is sorted, using a customized variant of the Quicksort algorithm BID19, which seeks to sort patterns according to compatibility and similarity. By iterating over the sorted patterns, all compatible patterns are assigned to the same cluster in this first pass, which provides a first very rough partition of the network. While this procedure is by no means perfect, it has the advantage of detecting major signals in the data very quickly. For this reason, it has also been introduced into the web-based EDICTOR tool, where a more refined method addressing the clique cover problem could not be used, due to the typical limitations of JavaScript running client-side. In a second step, an inverse version of the Welsh-Powell algorithm for graph coloring BID57 is employed. This algorithm starts from sorting all existing partitions by size, beginning with the largest partitions. It then consecutively compares the currently largest partition with all other partitions, merging those which are compatible with each other and keeping the incompatible partitions in the queue. The algorithm stops once all partitions have been visited and compared against the remaining partitions. In order to adjust the algorithm to the specific needs of correspondence pattern recognition in historical linguistics, I use a slightly modified version. The method starts by sorting all partitions (which were retrieved from the application of the sorting algorithm) in reverse order, using the number of non-missing segments in the pattern and the density of the alignment sites assigned to the pattern as our criterion. The density of a given correspondence pattern and the alignment site matrix (showing all alignment sites compatible with the pattern) is calculated by dividing the number of cells with no missing data in the matrix by the total number of cells in the matrix (see Figure 7 for an example). The method then selects the first element of the sorted partitions and compares it against all the remaining partitions for compatibility as defined above.
If the first partition is compatible with another partition, the two partitions are merged into one, and the new partition is further compared with the remaining partitions. If the partition is not compatible, the incompatible partition is appended to a queue. Once all partitions have been checked for compatibility, the pattern that was checked against the remaining patterns is placed in the result list, and the queue is sorted again according to the specific sort criteria. The procedure is repeated until all initial partitions have been checked against all others.

Figure 7 Calculating the alignment site density of a given correspondence pattern. The density is calculated by dividing the number of cells in the alignment site matrix with no missing data by the total number of cells in the matrix.

Figure 8 gives an artificial example that illustrates how the basic method infers the clique cover. Starting from the data in (A), the method assembles patterns A and B in (B) and computes their pattern, thereby retaining the non-missing data for each language in the pattern as the representative value. Having added C and D in this fashion in steps (C) and (D), the remaining three alignment sites, E-G, are merged to form a new partition, accordingly, in steps (E) and (F). In this context, it is important to note that the originally selected pattern may change during the merge procedure, since missing spots can be filled by merging the pattern with a new alignment site. For this reason, it is possible that this procedure, when only carried out one time, may not result in a true clique cover (in which all compatible alignment sites are merged). For this reason, the procedure is repeated several times (3 times is usually enough), until the resulting partitioning of the alignment site graph represents a true clique cover. Obviously, this algorithm only approximates the clique cover problem. However, as we will see in Section 5, it works reasonably well, at least for the smaller datasets which were considered in the tests.

Figure 8 Example for the basic method to compute the clique cover of the data. (A) shows all alignment sites in the data. (B-D) show how the algorithm selects potential edges step by step in order to arrive at a first larger clique cover. (E-F) show how the second cover is inferred. In each step during which one new alignment site is added to a given pattern, the pattern is updated, filling empty spots. While there are two missing data points in (E), where only alignment sites E and F are merged, these are filled after adding G.

In the final stage of assigning alignment sites to correspondence patterns, our method first assembles all correspondence patterns inferred from the greedy clique cover analysis and then iterates over all alignment sites, checking again whether they are compatible with a given pattern or not. Since alignment sites may suffer from missing data, their assignment is not always unambiguous. The example alignment from Figure 1, for example, would yield two general correspondence patterns, namely u-u-u-au vs. u-u-u-u. While the assignment of the alignment sites A and C in the figure would be unambiguous, the sites E and F would be assigned to both patterns, since, judging from the data, we could not tell which correspondence pattern they represent in the end.
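A compact Python sketch of one greedy merge pass and the density criterion (cf. Figure 7). This is a simplification for illustration, not the library's implementation; the real method uses a customized sort criterion and repeats the pass until a true clique cover is reached. It relies on a `compatible` predicate as sketched in Section 3.1:

def merge_pattern(partition, missing="Ø"):
    # Collapse the sites of a partition into one pattern, filling
    # missing spots with the first concrete reflex per language.
    pattern = list(partition[0])
    for site in partition[1:]:
        pattern = [b if a == missing else a for a, b in zip(pattern, site)]
    return pattern

def density(partition, missing="Ø"):
    # Share of non-missing cells in the alignment site matrix.
    cells = sum(len(site) for site in partition)
    return sum(s != missing for site in partition for s in site) / cells

def greedy_cover(sites, compatible):
    # One pass: take the highest-ranked partition, absorb all compatible
    # partitions (the pattern may gain filled spots along the way),
    # then re-sort the queue and continue.
    key = lambda p: (sum(s != "Ø" for s in merge_pattern(p)), density(p))
    queue = sorted(([s] for s in sites), key=key, reverse=True)
    cover = []
    while queue:
        current, rest = queue[0], []
        for other in queue[1:]:
            if compatible(merge_pattern(current), merge_pattern(other)):
                current = current + other
            else:
                rest.append(other)
        cover.append(current)
        queue = sorted(rest, key=key, reverse=True)
    return cover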
Given that the perspective on sound correspondences and sound correspondence patterns presented in this study does not have, at least to the best of my knowledge, predecessors in the form of quantitative studies, it is difficult to come up with a direct test of the suitability of the approach. Since classical linguists have never discussed all correspondence patterns in their data exhaustively, we have no direct means to carry out an evaluation study into the performance of the new approach as compared to an expert-annotated gold standard. What can be done, however, is to test specific characteristics of the method by contrasting the findings when varying certain parameters, or by introducing certain distortions and testing how the method reacts to them. Last but not least, we can also carry out a deep qualitative analysis of the results by manually inspecting proposed correspondence patterns. Before looking into these aspects in more detail, however, it is useful to look at some general statistics and results when applying the method to different datasets.

Table 4. Basic statistics on the test data used to test the new method.

The training data is listed in the appendix and was only used for initial trials when developing the method. For the tests, I use the benchmark database for automatic cognate detection compiled for the study of BID40. This database offers a training and a test set, consisting of six subsets each, with data from different subgroups of different language families. In general, the datasets are rather small, ranging from 5 to 43 language varieties and from 109 to 210 concepts, with a moderate genetic diversity. For our purpose, small datasets of rather closely related languages are very useful, not only because it is easier to evaluate them manually, but also because we can rely on automated alignments when searching for sound correspondence patterns. Table 4 provides an overview of the datasets along with basic information regarding the original data sources, the number of languages, concepts, and cognate sets. I also introduce a new measure, which I call cognate density, which provides a rough estimate of the genetic diversity of a given dataset. The cognate density D can be calculated with the help of the formula

$$D = 1 - \frac{1}{m}\sum_{i=1}^{m}\frac{1}{n_i}\sum_{j=1}^{n_i}\frac{1}{\mathrm{cognates}(w_{ij})},$$

where m is the number of concepts, n_i is the number of words in concept slot m_i, w_ij is the j-th word in the i-th concept slot, and cognates(w_ij) is the size of the cognate set to which w_ij belongs. If the cognate density is high, this means that the words in the data tend to cluster in large cognate sets. If it is low, this means that many words are isolated. If no words in the data are cognate, the density is zero. The cognate density measure is potentially useful to inspect specific strengths and weaknesses of the method proposed here, and one should generally expect the method to work better on datasets with a high cognate density, since datasets with low density will have many sparse cognate sets which will be difficult to assign consistently to unambiguous correspondence patterns. As a first test, the method was applied to the test data and some basic statistics were calculated.
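A direct translation of the density formula into code might look as follows. This is a sketch under the reconstruction of the formula given above; the wordlist data structure is hypothetical.

```python
from collections import Counter

def cognate_density(wordlist):
    """wordlist: dict mapping each concept to the list of cognate-set
    IDs of its words (one ID per word, across languages)."""
    scores = []
    for concept, cogids in wordlist.items():
        sizes = Counter(cogids)           # cognates(w_ij) per word
        n = len(cogids)
        scores.append(sum(1.0 / sizes[c] for c in cogids) / n)
    return 1.0 - sum(scores) / len(scores)

# If no words are cognate, every set has size 1 and the density is 0:
print(cognate_density({"hand": [1, 2, 3], "foot": [4, 5, 6]}))  # -> 0.0
```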
Since the datasets are cognate-coded, but not yet phonetically aligned, I computed phonetic alignments for all datasets using the SCA algorithm in LingPy's default settings, 11 before applying the correspondence pattern recognition method in three different versions: one inferring correspondence patterns from all alignment sites, regardless of whether they reflect a vowel or a consonant, one where only consonants are considered, and one where only sites containing vowels are compared. The results of this analysis are summarized in Table 5, which lists the number of alignment sites (St.), the number of inferred correspondence patterns (Pt.), the number of unique (singleton) patterns which cover only one alignment site and cannot be assigned to any other pattern (Sg.), and the fuzziness of the patterns (Fz.), which is the average number of different patterns to which each individual site can be attached, for all three variants (all patterns, only consonants, and only vowels) for each of the six datasets.

Table 5. General statistics on the patterns inferred from the test sets.

What we can see from these results is that the method seems to be successful in drastically reducing the number of alignment sites by assigning them to the same pattern. What is also evident, but not necessarily surprising, is the large proportion of unique patterns across all datasets. A further aspect worth mentioning is that, apart from the case of Bahnaric, the fuzziness of the assignment of alignment sites to the inferred correspondence patterns seems to be generally higher for vowels than for consonants. This is generally not surprising, as it is well known that sound correspondences among vowels are much more difficult to establish than for consonants. Correspondence patterns which represent only one alignment site in the data can be regarded as irregular with respect to the datasets, as they do not offer enough evidence to conclude whether they are representative of the languages under investigation or not. Obviously, irregular correspondence patterns may arise for different reasons. Among these are errors in the data (e.g., resulting from mistaken transcriptions), errors in the cognate judgments (simple lookalikes and undetected borrowings), errors in the alignments (assuming that correspondence patterns can only be inferred strictly by aligning the words in question), irregular sound change processes (especially assimilation of frequently recurring words, often triggered by morphological processes, but also cases like metathesis), analogy (in a broader sense, referring not only to inflectional paradigms, but also to more abstract interferences among word families in a given language), and missing data that renders regular sound change processes irregular (e.g., if scholars have not searched thoroughly enough for more examples, or if there is truly only one example left or available in the data).12 Given the multiple reasons by which singleton correspondence patterns can emerge, it is difficult to tell, without inspecting the data in detail, what exactly they result from. A potentially general problem, which can easily be tested, is that the alignments were carried out automatically, while the cognate sets were assigned manually. This may lead to considerable distortions, since manual cognate coders who disregard alignments usually do not pay much attention to questions of partial cognacy or morphological differences among cognate words due to derivation processes.
As a result, any automatic alignment method applied to historically diverse cognate words will necessarily align parts which a human would simply exclude from the analysis. We can automatically approximate this analysis by taking only those sites of the alignments in the data into consideration in which the number of gaps does not exceed a certain threshold. A straightforward threshold excludes all alignment sites where gaps are in the majority, compared to the frequency of any other character in the site. The advantage of this criterion is that it is built into LingPy's function for the computation of consensus sequences from phonetic alignments. Consensus sequences represent, for each site of an alignment, the most frequently recurring segment BID51. To exclude all sites in which gaps are most frequent, it is therefore enough to compute a consensus sequence for all alignments and disregard those sites for which the consensus yields a gap when carrying out the correspondence pattern recognition analysis. The results of this analysis are shown in Table 6. As can easily be seen, the analysis in which alignment sites with a considerable number of gaps are excluded produces considerably lower proportions of singleton correspondence patterns for all six test sets. The fact that the number of alignment sites is also drastically reduced in all datasets further illustrates how important it may be to invest the time to manually align cognate sets and mark affixes as non-alignable parts.

Table 6. Calculating correspondence patterns from alignment sites with a limited number of gaps. The last two columns contrast the proportions of singleton correspondence patterns in the original analysis reported in Table 5 above (Gappy) with the results obtained for the refined analysis in which gappy alignment sites are excluded (Non-Gappy).

In the previous section, I have mentioned different factors that may influence the correspondence pattern analysis. Although we lack gold standards against which the method could be compared, we can design experiments which mimic various challenges for the correspondence pattern recognition analysis. In the following, I will discuss three experiments in which the data is artificially modified in a controlled way in order to see how the method reacts to specific challenges. As a first experiment, let us consider cases of undetected borrowings in the data. While it is impossible to simulate borrowings realistically for the time being, we can use a simple workaround inspired by BID8 and tested on linguistic data in BID35. This approach consists of the "seeding" of false borrowings among a certain number of language pairs in the data, as sketched below. Our version of this approach takes a pre-selected number of donor-recipient pairs and a pre-selected number of events as input and then randomly selects language pairs and word pairs from the data. For each event, one word is transferred from the donor to the recipient, and both items are marked as cognate. If an original counterpart is missing in the recipient language, the empty slot is filled by adding the word from the donor language. In order to test the impact that the introduction of borrowings has on the analysis, I introduce a rough measure of cognate set regularity derived from the inferred correspondence patterns.
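A minimal sketch of this seeding procedure, with hypothetical data structures (for simplicity, the borrowed word here replaces or fills the recipient's slot and shares the donor's cognate-set ID):

```python
import random

def seed_borrowings(words, pairs, n_events, rng=random.Random(42)):
    """words: dict (language, concept) -> (form, cogid);
    pairs: list of pre-selected (donor, recipient) language pairs."""
    concepts = sorted({concept for (_, concept) in words})
    for _ in range(n_events):
        donor, recipient = rng.choice(pairs)
        concept = rng.choice(concepts)
        if (donor, concept) in words:
            form, cogid = words[donor, concept]
            # transfer the word and mark donor and recipient as cognate
            words[recipient, concept] = (form, cogid)
    return words
```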
This measure, which I call pattern regularity (PR) for convenience, uses the above-mentioned alignment site density scores for the correspondence patterns to which each alignment site in a given cognate set is attached, and scores their regularity using a user-defined threshold (see the sketch below). If less than half of all alignment sites are judged to be regular according to this procedure, the whole cognate set is assumed to be irregular. If we encounter a cognate set in the data which is judged to be irregular according to this criterion, it is split up by assigning all words in the cognate set to independent cognate sets. If a dataset is highly irregular, it will lose many cognate sets after applying this procedure, and accordingly, its cognate density will drop. By comparing the cognate density of the original dataset after applying the PR measure with a dataset that was distorted by artificial borrowings, it is possible to test the impact of undetected borrowings on the method directly. Table 7 presents the results of this test. Based on tests with the training data, I set the PR threshold to 0.25 and ran 100 trials for each dataset, each time comparing the density in the original dataset and the dataset with the artificial borrowings, for a controlled number of language pairs and a controlled number of borrowing events. The number of language pairs may seem rather high. This was intended, however, as I wanted to simulate spurious borrowings rather than intensive borrowings between only a few varieties (which would necessarily increase the pattern regularity). Based on the positive experience with the exclusion of gapped alignment sites, the same variant was used for these tests. As can be seen from the results in the table, the cognate density drops for most datasets when applying the PR measure. The only exception is Uralic, where density increases after adding the borrowings. The only explanation I have for this behaviour at the moment is that it results from the generally low cognate density of the dataset and the low phonetic diversity of the languages. If the languages are phonetically similar, borrowings do not surface as irregular correspondence patterns or cognate sets, and it is impossible to tell whether words have been regularly inherited or not. In the other cases, however, I am confident that the approach reflects the expected behaviour: if the data contains a considerable amount of undetected borrowings, this will disturb the correspondence patterns and decrease the pattern regularity of a dataset.

Table 7. Comparing pattern regularity for artificially seeded borrowings in the data. The table contrasts the original density (Orig. Ds.) with the density after applying the pattern regularity measure (PR Ds.), both for the unmodified and the modified dataset. The last two columns show the number of language pairs (Lg.) in which borrowings were introduced and the number of borrowing events (Ev.).

Erroneous Cognates. In addition to undetected borrowings, the data can also suffer from wrong cognate assignments independent of borrowing, be it due to lookalikes which were erroneously judged to be cognate, or due to simple errors resulting from the annotation process. We can simulate these cases in a similar manner as was done with the seeding of artificial borrowings, by seeding erroneous words into the cognate sets in the data. In order to distinguish this experiment from the experiment on borrowings, but also to make it more challenging, I used LingPy's built-in method for word generation.
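Before turning to that experiment, here is a minimal sketch of the PR check described above. Data structures and names are hypothetical; `densities` is assumed to give the density score of the pattern assigned to each alignment site of a cognate set.

```python
def is_regular(site_densities, threshold=0.25):
    """A cognate set is kept if at least half of its alignment sites
    map to patterns whose density meets the threshold."""
    regular = sum(d >= threshold for d in site_densities)
    return regular >= len(site_densities) / 2

def split_irregular(cognate_sets, densities, threshold=0.25):
    """Split irregular cognate sets into singleton sets, so that the
    cognate density of an irregular dataset drops accordingly."""
    new_sets, next_id = {}, 1
    for cogid, word_list in cognate_sets.items():
        if is_regular(densities[cogid], threshold):
            new_sets[next_id] = word_list
            next_id += 1
        else:
            for word in word_list:
                new_sets[next_id] = [word]
                next_id += 1
    return new_sets
```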
This method takes a list of words as input and returns a generator (a Markov chain) that generates new words from the input data with similar phonotactics. The method is by no means exact, employing a simple bigram model consisting of the original sound segment and a symbol indicating its prosodic position, following the prosodic model outlined in (List 2014, 119-134). For our purpose, however, it is sufficient, as we do not need the best possible model for the creation of pseudo-words, and the input data we can provide is in any case rather limited.

Table 8. Comparing pattern regularity for artificially seeded neologisms in the data. The table contrasts the original density (Orig. D.) with the density after applying the pattern regularity measure (PR D.). The last two columns show the number of languages (L.) in which neologisms were introduced and the number of replacement events (Ev.).

The results of this second experiment are reported in Table 8. As can be seen from the table, the density drops to different degrees in all datasets except for Huon. We have to admit that we could not find an explanation for this outlier. All we can suspect is that the very simple syllable structure of the languages may in fact yield words which are very similar to the words they were supposed to replace. Why this would lead to a slight increase of cognate density, however, is still not entirely clear to us. Nevertheless, in the other cases we are confident that our method correctly picks up the signals of disturbance in the data. The more erroneously assigned cognate sets we find in a given dataset, the more difficult it will be to find regular correspondence patterns. Testing the Predictive Force of Correspondence Patterns. As a final experiment to be reported in this section, let us investigate the predictive force of correspondence patterns. Since the method for correspondence pattern recognition at its core imputes missing data, it can in theory also be used to predict how a given word should look in a given language if the reflex of the corresponding cognate set is missing. An example of the prediction of forms has been given above for the cognate set Dutch dorp and German Dorf. Since we know from Table 1 that the correspondence pattern of d in Dutch and German usually points to Proto-Germanic *þ, we can propose that the English reflex (which is missing in Modern English) would start with th, if it were still preserved.13 Since the method for correspondence pattern recognition assigns one or more correspondence patterns to each alignment site, even if the site has missing data for a certain number of languages, all that needs to be done in order to predict a missing entry is to look up the alignment pattern and check the value that is proposed for the given language variety. How well the correspondence patterns in a given dataset predict missing reflexes can again be tested in a straightforward way by artificially introducing missing reflexes into the datasets. To make sure that the reflexes which should be predicted are in fact predictable, it is important both to restrict the number of reflexes which are deleted from a given dataset and to delete only those reflexes from the data which appear in cognate sets of a certain size. In this way, we can guarantee that the method has a fair chance to identify missing data.
Following these considerations, the experiment was designed as follows: in 100 different trials, regular words from each dataset were excluded and the correspondence patterns were inferred from the modified datasets. The number of words to be excluded was automatically derived for each dataset by (a) selecting cognate sets whose size was at least half of the number of languages in the dataset, and (b) selecting one reflex from one third of the preselected cognate sets. As in some of the previous experiments, highly gapped sites were excluded from the analysis. The prediction rate per reflex was then computed by dividing the number of correctly predicted sites by the total number of sites for a given reflex. Given that the method may assign one alignment site to more than one correspondence pattern, the number of correctly predicted sites was adjusted by taking the average number of correctly predicted sites when a fuzzy site was encountered. In order to learn more about the type of sounds which are best predicted by the method, the predictive force was computed not only for all sites, but also for vowels and consonants in separation. The results of this experiment are provided in Table 9. As can be seen from the table, the prediction based on inferred correspondence patterns does not work overwhelmingly well, with only a small proportion of the missing reflexes being correctly assigned. This does not, however, invalidate the method itself, but rather reflects the general problems we encounter when working with datasets of limited size in historical linguistics.

Table 9. Predicting missing reflexes from the data. Column MSS shows the minimal size of cognate sets that were considered for the experiment. Column MR points to the number of reflexes which were excluded, Ds. provides the cognate density of the dataset, and Fz. the fuzziness of the assignment of patterns to alignment sites. In addition to the predictive force for all sites, consonants, and vowels, the density and the fuzziness of the alignment sites for each dataset are also reported.

Since the datasets in the test and training data are all of a smaller size, ranging between 110 and 210 concepts only, it is not generally surprising that the prediction of missing reflexes based on previously inferred regular correspondence patterns cannot yield the highest accuracy scores. That we are dealing with general regularity issues (of small wordlists or of sound change processes in general) is also reflected in the fact that the prediction rate for consonants is much higher than the one for vowels. Given the limited design space of vowels as opposed to consonants, vowel change is much more prone to idiosyncratic behavior than consonant change. This is also reflected in the experiment on the predictive force of automatically inferred correspondence patterns. Inspecting the results of the analyses in due detail would go largely beyond the scope of this paper. To illustrate, however, how the analysis can aid in practical work on linguistic reconstruction, I want to provide an example from the Chinese test set. The Chinese data has the advantage of offering quick access to Middle Chinese reconstructions for most of the items. Since Middle Chinese is only partially reconstructed on the basis of historical language comparison, and mostly based on written sources, such as ancient rhyme books and rhyme tables BID2, the reconstructions are not entirely dependent on the modern dialect readings.
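For illustration, the prediction step described at the beginning of this section amounts to a simple lookup of the pattern value proposed for the language in question. A sketch, with hypothetical data structures:

```python
def predict_reflex(site_patterns, target):
    """site_patterns: one correspondence pattern per alignment site of
    the cognate set, each a dict mapping language -> proposed segment;
    returns the predicted segments for the target language."""
    return [pattern.get(target, "?") for pattern in site_patterns]

# Toy example in the spirit of Dutch dorp / German Dorf above:
patterns = [{"German": "d", "Dutch": "d", "English": "th"},
            {"German": "o", "Dutch": "o", "English": "o"}]
print(predict_reflex(patterns, "English"))  # -> ['th', 'o']
```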
In Table 10, I have listed all patterns inferred by the method for correspondence pattern recognition for a reduced number of dialects (one from each major subgroup), which can all be reconstructed to a dental stop in Middle Chinese (*t, *tʰ, or *d). If we only inspect the first four patterns in the table, we can see that MC *d corresponds to two distinct patterns (#85 and #135). Sūzhōu (SZ), one of the dialects of the Wú group, which usually inherit the three-stop distinction of voiceless, aspirated, and voiced stops in Middle Chinese, shows voiced [d] as expected in both patterns, but Běijīng, Guǎngzhōu, and Fúzhōu have contrastive outcomes in the two patterns ([tʰ] vs. [t]), reflecting the tone-conditioned devoicing in the three dialects.14

Table 10. Contrasting inferred correspondence patterns with Middle Chinese reconstructions (MC) and tone patterns (MC Tones: P: píng (flat), S: shǎng (rising), Q: qù (falling), R: rù (stop coda)) for representative dialects of the major groups (Běijīng, Sūzhōu, Chángshā, Nánchāng, Měixiàn, Guǎngzhōu, Fúzhōu).

If we had no knowledge of Middle Chinese, it would be harder to understand that both patterns correspond to the same proto-sound, but once assembled in such a way, it would still be much easier for scholars to search for a conditioning context that allows them to assign the same proto-sound to the two patterns in question. In pattern #197, we can easily see that Fúzhōu shows an unexpected sound when comparing it with the other patterns in the table. If Fúzhōu had a [tʰ] instead of the [l], we could merge it with pattern #85. The conditioning context for the deviation, which can again be quickly found when inspecting the data more closely, is a weakening of syllable-initial sounds in non-initial syllables in Fúzhōu, which can easily be seen when comparing the compound Fúzhōu [suɔʔ⁴ lau⁵²] 'stone' (lit. 'stone-head') with the word [tʰau⁵²] 'head' in isolation. The same process can also be found in pattern #26, with the difference that the pattern corresponds to pattern #135, as the Middle Chinese words have one of the oblique tones. The reflex [s] in Méixiàn is irregular, though, resulting from an erroneous cognate judgment that links Fúzhōu [liaʔ²³] with Méixiàn [sɛ⁴⁴] 'to lick'. Although the final pattern looks irregular, given that it occurs only once, it can also be shown to be a variant of #85, since the reflex in Fúzhōu is again due to the weakening process, but this time resulting in assimilation with the preceding nasal (compare Fúzhōu [seiŋ⁵² nau³¹] 'the front (front side)' with additional tone sandhi). The example shows that, as far as the Middle Chinese dental stops are concerned, we do not find explicit exceptions in our data, but can rather see that multiple correspondence patterns for the same proto-sound may easily evolve. We can also see that careful alignment and cognate annotation are crucial for the success of the method, but even if the cognate judgments are fine, the method may propose erroneous groupings when the data are sparse. In contrast to manual work on linguistic reconstruction, where correspondence patterns are never regarded in the detail in which they are presented here, the method is a real boost, especially in combination with tools for cognate annotation, like EDICTOR, to which I added a convenient way to inspect inferred correspondence patterns interactively.
Since linguists can run the new method on their data and then directly inspect the consequences by browsing all correspondence patterns conveniently in the EDICTOR, the method makes it a lot easier for linguists to come up with first reconstructions or to identify problems in the data. In this study I have presented a new method for the inference of sound correspondence patterns in multi-lingual wordlists. Thanks to its integration with the LingPy software package, the method can be applied either in the form of fully automated workflows, where cognate sets, alignments, and correspondence patterns are all computed automatically, or in computer-assisted workflows, where linguists manually annotate parts of the data at any step in the workflow. Having shown that the inference of correspondence patterns can be seen as the crucial step underlying the reconstruction of proto-forms, the method presented here provides a basis for many additional approaches in the fields of computational historical linguistics and computer-assisted language comparison. Among these are (a) automatic approaches for linguistic reconstruction, (b) alignment-based approaches to phylogenetic reconstruction, (c) the detection of borrowings and erroneous cognates, and (d) the prediction of missing reflexes in the data. The approach is not perfect in its current form, and many kinds of improvements are possible. Given its novelty, however, I consider it important to share the approach in its current form, hoping that it may inspire colleagues in the field to expand and develop it further. The supplementary material contains the Python package, a short tutorial (as an interactive Jupyter notebook and HTML) along with data illustrating how to use it, all the code that is needed to replicate the analyses discussed in this study along with usage instructions, the test and training data, and the expanded EDICTOR version in which correspondence patterns can be inspected in various interactive ways. The supplementary material has been submitted to the Open Science Framework for anonymous review. It can be accessed from the link https://osf.io/mbzsj/?view_only=b7cbceac46da4f0ab7f7a40c2f457ada.
The paper describes a new algorithm by which sound correspondence patterns for multiple languages can be inferred.
961
scitldr
We describe Kernel RNN Learning (KeRNL), a reduced-rank, temporal eligibility trace-based approximation to backpropagation through time (BPTT) for training recurrent neural networks (RNNs) that gives competitive performance to BPTT on long time-dependence tasks. The approximation replaces a rank-4 gradient learning tensor, which describes how past hidden unit activations affect the current state, by a simple reduced-rank product of a sensitivity weight and a temporal eligibility trace. In this structured approximation motivated by node perturbation, the sensitivity weights and eligibility kernel time scales are themselves learned by applying perturbations. The rule represents another step toward biologically plausible or neurally inspired ML, with lower complexity in terms of relaxed architectural requirements (no symmetric return weights), a smaller memory demand (no unfolding and storage of states over time), and a shorter feedback time. Animals and humans excel at learning tasks that involve long-term temporal dependencies. A key challenge of learning such tasks is the problem of spatiotemporal credit assignment: the learner must find which of many past neural states is causally connected to the currently observed error, then allocate credit across neurons in the brain. When the time-dependencies between network states and errors are long, learning becomes difficult. In machine learning, the current standard for training recurrent architectures is Backpropagation Through Time (BPTT; BID18). BPTT assigns temporal credit or blame by unfolding a recurrent neural network in time up to a horizon length T, processing the input in a forward pass, and then backpropagating the error back in time in a backward pass (see FIG0). From a biological perspective, BPTT, like backpropagation in feedforward neural networks, is implausible for many reasons. For each weight update, BPTT requires using the transpose of the recurrent weights to transmit errors backwards in time and assign credit for how past activity affected present performance. Running the network with transposed weights requires that the network either has two-way synapses, or uses a symmetric copy of the feedforward weights to backpropagate error. In either case, the network must alternatingly gate its dynamical process to run forward or backward, and switch from nonlinear to linear dynamics, depending on whether activity or errors are being sent through the network. From both biological and engineering perspectives, there is a heavy memory demand: the complete network states, going T timesteps back in time, must be stored. The time-complexity of computation of the gradient in BPTT scales like T, making each iteration slow when training tasks with long time-scale dependencies. Although T should match the length of the task or the maximum temporal lag between network states and errors for unbiased gradient learning, in practice T is often truncated to mitigate these computational costs, introducing a bias. The present work is another step in the direction of providing heuristics and relaxed approximations to backpropagation-based gradient learning for recurrent networks. KeRNL confronts the problems of efficiency and biological plausibility. It replaces the lengthy, linearized backward-flowing backpropagation phase with a product of a forward-flowing temporal eligibility trace and a spatial sensitivity weight. Instead of storing all details of past states in memory, synapses integrate their past activity during the forward pass (see FIG0).
The network does not have to recompute the entire gradient at each update, as the time-scales of the eligibility trace and the sensitivity weight are learned over time. In recent years, much work has been devoted to implementing backpropagation algorithms in a more biologically plausible way, partly in the hope that more plausible implementations might also be simpler. The symmetry requirement between the forwards and backwards weights can be alleviated by using random return weights (BID9 and BID10); however, learning still requires a separate backward pass through a network with linearized dynamics. Neurons may be able to extract error information in the time derivative of their firing rates using an STDP-like learning rule BID0, with error backpropagation computed as a relaxation to equilibrium BID14, at least for learning fixed points. Other work has focused on replacing batch learning with online learning. Typically, BPTT is implemented in a setting where data is prepared into batches of fixed sequence length T and used to perform learning in a T-step unrolled graph; however, online learning, with a constant stream of data error signals, is a more natural description of how the world supplies a learning system with data. BPTT without truncation struggles with online learning, as it must repeatedly backpropagate the error all the way through a continuously expanding graph. Since computation of the unbiased gradient scales with the length of the graph, gradient computation increases linearly with time. For a task with T timesteps, the total computation of the gradients scales like T^2. Real-Time Recurrent Learning (RTRL) and Unbiased Online Recurrent Optimization (UORO; BID15, BID11) deal with this issue by keeping track of how the synaptic weights affect the hidden state in a feedforward way. Decoupled Neural Interfaces (DNI; BID6) estimates the truncated part of the gradient by continually predicting the future loss with respect to the hidden state. KeRNL offers this same advantage, in addition to other benefits. RTRL requires that the network keep track of an unwieldy rank-3 tensor, which could not be stored by any known biological entities. UORO factorizes this into rank-2 objects but still requires non-local computations like vector norm operations. Finally, DNI requires an entire separate network to keep track of the synthetic gradient. KeRNL is distinguished by its simplicity, requiring only rank-2 tensors. All computations are local, and synapses need to integrate over only a few relevant quantities. Consider a single-layer RNN in discrete time (indexed by t) with readout, input, and hidden layer activations given by y^t, x^t, and h^t, respectively (boldface represents vectors, with vector entries denoting the activity of individual units). The dynamics of the recurrently connected hidden units are given by

h^t = \sigma(g^t), \quad g^t = W^{rec} h^{t-1} + W^{in} x^t + b,

where W^{rec}, W^{in} are the recurrent and input weights, b are the hidden biases, σ is a general pointwise nonlinearity, and g^t represents the summed inputs (pre-nonlinearity) to the neurons at time t.
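A minimal sketch of this forward dynamics, keeping the summed input g^t around because its derivative enters the eligibility trace defined below; shapes and names are assumptions for illustration only.

```python
import numpy as np

def rnn_step(h_prev, x_t, W_rec, W_in, b, sigma=np.tanh):
    """One step of the RNN dynamics defined above."""
    g_t = W_rec @ h_prev + W_in @ x_t + b   # summed (pre-nonlinearity) input
    h_t = sigma(g_t)
    return h_t, g_t
```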
The readout is given by

y^t = W^{out} h^t + b^{out},

and the cost is C = C(y^T, \hat{y}^T), where \hat{y}^T is the target output, in the case where error feedback is received at the end of an episode of length T, and C = \sum_{t=0}^{T} C_t when errors are received at every time step. The KeRNL weight update takes the form

\Delta W_{jk} \propto \sum_i \frac{\partial C}{\partial h_i^t} \beta_{ij} e_{jk}^t,

where \partial C / \partial h_i^t is the gradient of the cost with respect to the current hidden state, \beta_{ij} is a set of learned sensitivity weights, and

e_{jk}^t = \sum_{\tau \geq 0} K(\tau, \gamma_j)\, \sigma'(g_j^{t-\tau})\, s_k^{t-\tau}

is a local eligibility trace consisting of a temporally filtered version of the product of presynaptic activation and a postsynaptic activity factor. The temporal filter or kernel, K, in the eligibility has learnable time-scales; in this manuscript we use the simplest version of a lowpass temporal filter, a decaying exponential with a single time-constant γ_j per neuron: K(τ, γ_j) = exp(−γ_j τ), though one can imagine many other function choices with multiple timescales. The role of the eligibility is to specify how strongly a synapse W_jk should be held responsible for any errors in neuron j at the present time, on the basis of how far in the past the presynaptic neuron k was active. Here s_k^t ∈ {h_k^t, x_k^t} stands in for the activation of the neuron presynaptic to the synapse being updated.1 Since the eligibility trace can be computed during the forward pass, KeRNL does not require backpropagating the error through time. Furthermore, KeRNL only uses at most rank-2 tensors, so neurons and synapses could plausibly do all of the required computation. The contrast between BPTT and KeRNL is depicted in FIG0 a,b. KeRNL emerges from the following Ansatz:

\frac{\partial h_i^t}{\partial h_j^{t-\tau}} \approx \beta_{ij}\, K(\tau, \gamma_j).

We call \partial h_i^t / \partial h_j^{t-\tau}, a key term in the computation of the gradient, the sensitivity tensor, in an extension of the usage in BID1. This sensitivity describes how the activity of neuron j at a previous time t − τ affects the activity of neuron i at the current time t. While the true sensitivity is a 4-index tensor summarizing many interactions based on the many paths through which activity propagates forward in a recurrent network, we approximate it with a product of a (learnable) rank-2 sensitivity weight matrix β and a temporal kernel K with (learnable) inverse-time coefficients γ. The sensitivity weights β_ij describe how strongly neuron j affects neuron i on average, while the temporal kernel describes how far into the future the activity of a neuron affects the other neurons for learning. We describe how to learn these parameters (β, γ) in the next section. We arrive at KeRNL by using our Ansatz for the sensitivity in the computation of a gradient-based weight update, instead of using the true sensitivity. First, we write down the full gradient rule for a recurrent network. If the parameters W_ij are treated as functions that can vary over time during a trial, then the derivative can be written as a functional derivative:

\frac{dC}{dW_{ij}} = \sum_t \frac{\delta C}{\delta W_{ij}(t)}.

This is simply mathematical notation for the "unfolding-in-time" trick, in which the network and weights are assumed to be replicated for each time-step of the dynamics of a recurrent network, and separate gradients are computed for each time-replica of the weights; the actual weight updates are simply the average of the separate weight variations for each time-replica.
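The following is a minimal NumPy sketch of the update as reconstructed above, assuming the exponential kernel and a tanh nonlinearity; with this kernel the eligibility reduces to a per-neuron leaky integrator. Names are hypothetical, and this is a sketch of the reconstructed rule, not the authors' reference code.

```python
import numpy as np

def update_eligibility(e, gamma, g_t, s_t):
    """e: (n_post, n_pre) eligibility; gamma: (n_post,) inverse
    timescales; s_t: presynaptic activity. Implements the low-pass
    filter of sigma'(g_j) * s_k with kernel exp(-gamma_j * tau)."""
    post_factor = 1.0 - np.tanh(g_t) ** 2          # sigma'(g) for tanh
    return np.exp(-gamma)[:, None] * e + np.outer(post_factor, s_t)

def kernl_grad(dC_dh, beta, e):
    """dC/dW_jk = sum_i (dC/dh_i) * beta_ij * e_jk."""
    return (dC_dh @ beta)[:, None] * e
```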
We next apply the sensitivity lemma BID1 to express gradients with respect to weights as gradients with respect to input activations, times the presynaptic activity:

\frac{\delta C}{\delta W_{jk}(t-\tau)} = \sum_i \frac{\partial C}{\partial h_i^t}\, \frac{\delta h_i^t}{\delta h_j^{t-\tau}}\, \sigma'(g_j^{t-\tau})\, s_k^{t-\tau}.

By replacing the sensitivity \delta h_i^t / \delta h_j^{t-\tau} with our Ansatz, we arrive at our learning rule, KeRNL. The time-dependent part of the computation, a leaky integral of the product of the presynaptic activity multiplied by the instantaneous change in the postsynaptic activity, can be computed during the forward pass, without any backpropagation of activity or error signals. For our Ansatz to align as well as possible with the gradient, we allow the sensitivity weights β and inverse-timescales γ to be learned. We learn these parameters by tracking the effect of small i.i.d. hidden perturbations ξ during the forward pass. In order to do so, our hidden neurons must store two values, the true hidden state h, and a perturbed hidden state \tilde{h}, which is generated by applying noise to the neurons during the forward pass:

\tilde{h}^t = \sigma(W^{rec} \tilde{h}^{t-1} + W^{in} x^t + b + \xi^t).

The effect of previous noise on the current hidden state can be computed using the sensitivity:

\tilde{h}_i^t - h_i^t \approx \sum_j \beta_{ij}\, \Omega_j^t, \quad \Omega_j^t = \sum_{\tau \geq 0} K(\tau, \gamma_j)\, \xi_j^{t-\tau}.

We train γ, β to predict the network's response to these noisy perturbations. We take gradients with respect to the objective function

\Phi = \sum_i \Big( \tilde{h}_i^t - h_i^t - \sum_j \beta_{ij} \Omega_j^t \Big)^2,

which we have generated by substituting our Ansatz into the sensitivity expansion above. Taking gradients with respect to this objective function gives us the following update rule for the sensitivity weights and inverse-timescales:

\Delta\beta_{ij} \propto \varepsilon_i\, \Omega_j^t, \qquad \Delta\gamma_j \propto \sum_i \varepsilon_i\, \beta_{ij}\, \Psi_j^t,

where \varepsilon_i = \tilde{h}_i^t - h_i^t - \sum_j \beta_{ij} \Omega_j^t represents the error in reconstructing the effect of the perturbation via the sensitivity weights, and \Omega_j^t and \Psi_j^t = \partial \Omega_j^t / \partial \gamma_j are integrals that neuron h_j performs over the applied perturbation ξ. In our implementation, we update these parameters immediately before we compute the gradient using the KeRNL rule. The full update rule is described in the pseudocode table.2 If we don't care about the size of the gradients and only the direction, we can use the cost function

\hat{\Phi} = \sum_i \Big( u_i - \sum_j \beta_{ij} \Omega_j^t \Big)^2, \quad u = \frac{\tilde{h}^t - h^t}{\lVert \tilde{h}^t - h^t \rVert}.

This cost function trains the parameters to predict the correct direction of the perturbed hidden state minus the hidden state, and works for algorithms where the gradient is divided by a running average of its magnitude (RMSProp, Adam). We test KeRNL on several benchmark tasks that require memory and computation over time, showing that it is competitive with BPTT across these tasks. We implemented batch learning with KeRNL and BPTT on two tasks: the adding problem (BID4; BID5) and pixel-by-pixel MNIST BID8. We implemented an online version of KeRNL with an LSTM network on the A^n, B^n task BID3 to compare with results from the UORO algorithm BID15. The tuned hyperparameters for BPTT and KeRNL were the learning rate, η, and the gradient clipping parameter, gc BID12. For KeRNL, we additionally permitted a shared learning rate parameter for the sensitivity weights and kernels, η_m. In practice, the same hyperparameter settings η, gc tended to work well for both BPTT and KeRNL. The additional hyperparameter for KeRNL, η_m, did not need to be finely tuned, and often worked well across a broad range (across several orders of magnitude, so long as it was not too small but smaller than η). We implemented both the RMSProp BID16 and Adam (Kingma & Ba) optimizers and reported the best results. In the adding problem, the network receives two input streams, one a sequence of random numbers in [0, 1], and the second a mask vector of zeros, with two entries set randomly to one in each trial.
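Returning briefly to the learning of β and γ described above, the following sketch updates the perturbation integrals recursively (which the exponential kernel permits) and then applies the gradient steps on the squared-error objective Φ. This is a sketch under the reconstruction given above, with hypothetical names; it is not the authors' implementation.

```python
import numpy as np

def perturbation_traces(Omega, Psi, gamma, xi_t):
    """Omega: kernel-filtered noise integral; Psi = dOmega/dgamma."""
    decay = np.exp(-gamma)
    Psi = decay * (Psi - Omega)     # recursion for the gamma-derivative
    Omega = decay * Omega + xi_t    # recursion for the filtered noise
    return Omega, Psi

def sensitivity_update(h_tilde, h, beta, gamma, Omega, Psi, lr=1e-4):
    eps = (h_tilde - h) - beta @ Omega       # reconstruction error
    beta = beta + lr * np.outer(eps, Omega)  # descent on Phi w.r.t. beta
    gamma = gamma + lr * (eps @ beta) * Psi  # descent on Phi w.r.t. gamma
    return beta, gamma
```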
The network's task is to sum the input from the first stream whenever there is a non-zero entry in the second. This task requires remembering sparse pieces of information over long time scales and ignoring long sequences of noise, which is difficult for RNNs when the sequences are long. We tested the performance of two networks on a variety of sequence lengths, up to 400, using both BPTT and KeRNL (Table 2). The networks were an IRNN, which is an RNN with a ReLU nonlinearity where the recurrent weight matrix is initialized to the identity, and an RNN with tanh nonlinearity. The implementation details are described in Appendix A. Untruncated BPTT applied to an IRNN performed very well on this task, but less so on the RNN with tanh nonlinearity. KeRNL was somewhat unstable on the IRNN, but it outperformed BPTT with the tanh nonlinearity (FIG3). We believe that KeRNL outperforms BPTT on the tanh nonlinearity because our Ansatz allows the sensitivity to extend over longer time scales than the vanishing gradients of the tanh nonlinearity. By applying gradients generated by our Ansatz (instead of the true gradients) we push our network toward a solution with longer time scales via a feedback-alignment-like mechanism BID9 BID10, as schematized in FIG0. To investigate the importance of learning the kernel timescales, we implemented KeRNL without training the sensitivity weights (β) or the inverse timescales (γ). When these parameters are not learned, KeRNL is still able to perform the task for the shorter 200-length sequence (Table 2), implying that a feedback-alignment-like mechanism BID9 BID10 may be enabling learning even when the error signals are not delivered along the instantaneous gradients. For longer sequences, however, learning the sensitivity and timescale parameters is important. Surprisingly, learning the inverse timescales is even more important than learning the sensitivity weights. We hypothesize that as long as the timescales over which error is correlated with outcome are appropriate, sensitivity weights are relatively less important because of feedback-alignment-like mechanisms. We show an example of how the timescales may change in FIG5. Our second task is pixel-by-pixel MNIST BID8. Here the RNN is given a stream of pixels left-to-right, top-to-bottom for a given handwritten digit from the MNIST data set. At the end of the sequence, the network is tasked with identifying the digit it was shown. This problem is difficult, as the RNN must remember a long sequence of 784 singly-presented pixels. We tuned over the same hyperparameters as in the adding problem, looking at performance after 100,000 minibatches. Neither KeRNL nor BPTT worked well with a tanh nonlinearity, but both performed relatively well on an IRNN (FIG5). KeRNL preferred a slightly lower learning rate η than BPTT. While the KeRNL algorithm is able to learn almost as quickly on pixel-by-pixel MNIST, it does not reach as high an asymptotic performance. Still, it performs reasonably well relative to BPTT on the task.

Table 2. Learning of KeRNL parameters. Left: Histogram of inverse time coefficients before training (blue) and after 7 × 10^4 minibatches (orange) on the adding problem: the network learns the relative importance of certain time-scales. Right: Examining the relative importance of learnable parameters in KeRNL: performance of BPTT and various versions of KeRNL using a tanh RNN after 7 × 10^4 minibatches: fixing the sensitivities, β, while learning the inverse timescales, γ, is better than doing the reverse.
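For reference, adding-problem inputs as described above can be generated as follows; a small hypothetical sketch, not the benchmark's canonical generator.

```python
import numpy as np

def adding_batch(batch_size, seq_len, rng=np.random.default_rng(0)):
    """Stream 1: random numbers in [0, 1]; stream 2: a mask with two
    randomly placed ones; target: the sum of the two marked numbers."""
    values = rng.random((batch_size, seq_len))
    mask = np.zeros((batch_size, seq_len))
    for b in range(batch_size):
        i, j = rng.choice(seq_len, size=2, replace=False)
        mask[b, i] = mask[b, j] = 1.0
    inputs = np.stack([values, mask], axis=-1)   # (batch, time, 2)
    targets = (values * mask).sum(axis=1)        # (batch,)
    return inputs, targets
```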
While KeRNL is comparable in speed to BPTT for batch learning, we expect it to be significantly faster for online learning when the time-dependencies are of length T. Untruncated BPTT requires information sent back T steps in time for each weight update, thus the wallclock speed of computation of the gradients at each weight update in online learning scales as T, and the total scaling is thus of order T^2. If BPTT updates are truncated S < T steps back in time, the scaling is ST. KeRNL requires no backward unrolling in time, thus online KeRNL requires only O(1) time per weight update, for a total scaling of T. As a result, optimized-speed online KeRNL should run faster than truncated online BPTT by a factor T when the truncation time is similar to the total time-dependencies in the problem. We tested the performance of online KeRNL against UORO, another online learning algorithm, and online BPTT on the A^n, B^n task, where the network must predict the next character in a stream of letters. Each stream consists first of a sequence of n As followed by a sequence of n Bs. The length, n, of the sequences is randomly generated in some range. The network cannot solve this task perfectly, as it cannot predict the number of As before it has seen the sequence, but can do well by matching the number of Bs to the number of As. We generated n ∈ {1, . . ., 32}. The minimum achievable average bit-loss for this task is 0.14. To compare with results in the literature, we implemented KeRNL in an LSTM layer, with h representing a concatenation of the hidden and cell states (details in Appendix B). Instead of optimizing common hyperparameters, we simply used the values from BID15, which included decaying the learning rate in time as η_t = η/(1 + α√t). However, we varied the learning rate η_m.

Table 5. Average cross-entropy bit-loss (over 10^4 minibatches) on the online A^n, B^n task after 10^6 minibatches. Results other than those for KeRNL are from BID15.

Although 17-step BPTT and UORO outperformed KeRNL, we expect speed-optimized versions of KeRNL to be much faster (wall clock speed) in direct comparisons. To test how computation time for truncated BPTT and KeRNL compare in the online setting, we implemented a dummy RNN, where the required tensor operations were performed using a random vector for both the input data and the error signal (TAB5; both algorithms were implemented in Python for uniformity; details in Appendix A). KeRNL is faster than truncated BPTT beyond very short truncation lengths.

           KeRNL   1-Step BPTT   3-Step BPTT   10-Step BPTT   20-Step BPTT
CPU Time   14.1    4.23          7.22          17.8           30.9

In this paper we show that KeRNL, a reduced-rank and forward-running approximation to backpropagation in RNNs, is able to perform roughly comparably to BPTT on a range of hard RNN tasks with long time-dependencies. One may view KeRNL as imposing a strong prior on the way in which neural activity from the past should be assigned credit for current performance, through the choice of the temporal kernels K in the eligibility trace, and the choice of the sensitivity weights β. This product of two rank-2 tensors in KeRNL (replacing the rank-4 sensitivity tensor for backpropagation in RNNs) assumes that the strength of influence of a neuron on another at a fixed time-delay can be summarized by a simple sensitivity weight matrix, β_ki, and a decay due to the time difference given by K.
This strong simplifying assumption is augmented or mitigated by the ability to (meta)learn the parameters of the sensitivity weights and kernels in the eligibility trace, giving the rule simultaneous simplicity and flexibility. The form of the KeRNL ansatz or prior, if well-suited to learning problems in recurrent networks, serves as a regularizer on the types of solutions the network can find, and could even, for good choices of kernel K, provide better solutions than BPTT. We present limited evidence that KeRNL may combat the vanishing gradient problem with tanh units by imposing a prior of long time-dependencies through the eligibility. Finally, we show that KeRNL can be implemented online, where it has a shorter computation cycle than BPTT. KeRNL is a step toward biologically plausible learning. It eschews the segmented two-phase backpropagation algorithm for a computation that is largely feedforward. It does not require the segmentation and storage of all past states, instead using an integrated activity or eligibility trace, and it gives rise to a naturally asymmetric structure that is more similar to the brain. While we show empirically that KeRNL performs hill-climbing, there is no guarantee that the gradients computed by KeRNL are unbiased. In the future, we hope to show empirically that KeRNL is able to perform well on more realistic tasks, and to obtain some analytical guarantees on the performance of KeRNL. We hope the present contribution inspires more work on training RNNs with shorter, more plausible feedback paths. More generally, we hope that the present work shows how, with the use of reduced-rank tensor products and eligibility traces, to construct entire nested families of relaxed approximations to gradient learning in RNNs. For the adding problem and pixel-by-pixel MNIST, we tested performance by varying η and gc over several orders of magnitude: η = {1e-03, 1e-04, 1e-05, 1e-06, 1e-07}, gc = {1, 10, 100}, using both Adam BID7 and RMSProp BID16. We then varied η_m = {1e-03, 1e-04, 1e-05, 1e-06, 1e-07} on KeRNL. We found that KeRNL was relatively robust across η_m. For all sequence lengths, we used the hyperparameters that performed best on the task with sequence length 400. Besides the recurrent weights of the IRNN, all other weight matrices were initialized using Xavier initialization BID2. We initialized β with Xavier initialization for the tanh RNN, and to the identity for the IRNN. This choice was motivated by the initial sensitivity of the IRNN (we used the alternative cost function described in footnote 2). We trained on both of these tasks using the Python numpy package (Walt et al.). For the dummy RNN, we used the Python numpy package BID17 to perform matrix algebra on an RNN with 100 hidden nodes, 100 input nodes, and a tanh nonlinearity. We called "matmul" for matrix multiplication and "einsum" for other tensor operations. We used the "tanh" and "cosh" functions to compute the nonlinearity and its derivatives. In this section we describe how to implement KeRNL on an LSTM in more detail. The dynamics of the LSTM (without peepholes) are as follows:

f^t = \sigma(net_f^t), \quad i^t = \sigma(net_i^t), \quad o^t = \sigma(net_o^t), \quad c^t = f^t \odot c^{t-1} + i^t \odot \tanh(net_c^t), \quad h^t = o^t \odot \tanh(c^t),

where net_j^t represents the presynaptic input to f_j^t (and analogously for the other gates). The other gradients can be calculated in an analogous manner. In the interest of full disclosure, we note that KeRNL did not perform well on next-word prediction on the Penn Treebank dataset.
We tested an LSTM network across a wide variety of learning rates and gradient clipping values and were not able to achieve near state-of-the-art performance using KeRNL.
A biologically plausible learning rule for training recurrent neural networks
962
scitldr
We present Deep Graph Infomax (DGI), a general approach for learning node representations within graph-structured data in an unsupervised manner. DGI relies on maximizing mutual information between patch representations and corresponding high-level summaries of graphs, both derived using established graph convolutional network architectures. The learnt patch representations summarize subgraphs centered around nodes of interest, and can thus be reused for downstream node-wise learning tasks. In contrast to most prior approaches to unsupervised learning with GCNs, DGI does not rely on random walk objectives, and is readily applicable to both transductive and inductive learning setups. We demonstrate competitive performance on a variety of node classification benchmarks, which at times even exceeds the performance of supervised learning. With encoder models based on graph convolutions BID10, it is unclear whether random-walk objectives actually provide any useful signal, as these encoders already enforce an inductive bias that neighboring nodes have similar representations. In this work, we propose an alternative objective for unsupervised graph learning that is based upon mutual information, rather than random walks. Recently, scalable estimation of mutual information was made both possible and practical through Mutual Information Neural Estimation (MINE), which relies on training a statistics network as a classifier of samples coming from the joint distribution of two random variables and their product of marginals. Following on MINE, Deep InfoMax (DIM) was introduced for learning representations of high-dimensional data. DIM trains an encoder model to maximize the mutual information between a high-level "global" representation and "local" parts of the input (such as patches of an image). This encourages the encoder to carry the type of information that is present in all locations (and thus is globally relevant), such as would be the case of a class label. DIM relies heavily on convolutional neural network structure in the context of image data, and to our knowledge, no work has applied mutual information maximization to graph-structured inputs. Here, we adapt ideas from DIM to the graph domain, which can be thought of as having a more general type of structure than the ones captured by convolutional neural networks. In the following sections, we introduce our method called Deep Graph Infomax (DGI). We demonstrate that the representation learned by DGI is consistently competitive on both transductive and inductive classification tasks, often outperforming both supervised and unsupervised strong baselines in our experiments. Contrastive methods. An important approach for unsupervised learning of representations is to train an encoder to be contrastive between representations that capture statistical dependencies of interest and those that do not. For example, a contrastive approach may employ a scoring function, training the encoder to increase the score on "real" input (a.k.a. positive examples) and decrease the score on "fake" input (a.k.a. negative samples). Contrastive methods are central to many popular word-embedding methods BID6 BID27 BID26, but they are found in many unsupervised algorithms for learning representations of graph-structured input as well. There are many ways to score a representation, but in the graph literature the most common techniques use classification BID32 BID13 BID24 BID16, though other scoring functions are used BID9 BID2.
DGI is also contrastive in this respect, as our objective is based on classifying local-global pairs and negative-sampled counterparts. Sampling strategies. A key implementation detail of contrastive methods is how to draw positive and negative samples. The prior work above on unsupervised graph representation learning relies on a local contrastive loss (enforcing proximal nodes to have similar embeddings). Positive samples typically correspond to pairs of nodes that appear together within short random walks in the graph; from a language modelling perspective, this effectively treats nodes as words and random walks as sentences. Recent work by BID2 uses node-anchored sampling as an alternative. The negative sampling for these methods is primarily based on sampling of random pairs, with recent work adapting this approach to use a curriculum-based negative sampling scheme (with progressively "closer" negative examples; BID45) or introducing an adversary to select the negative examples BID3. Predictive coding. Contrastive predictive coding (CPC) is another method for learning deep representations based on mutual information maximization. Like the models above, CPC is also contrastive, in this case using an estimate of the conditional density (in the form of noise-contrastive estimation; BID14) as the scoring function. However, unlike our approach, CPC and the graph methods above are all predictive: the contrastive objective effectively trains a predictor between structurally-specified parts of the input (e.g., between neighboring node pairs or between a node and its neighborhood). Our approach differs in that we contrast global / local parts of a graph simultaneously, where the global variable is computed from all local variables. To the best of our knowledge, the sole prior works that instead focus on contrasting "global" and "local" representations on graphs do so via (auto-)encoding objectives on the adjacency matrix (BID41) and incorporation of community-level constraints into node embeddings (BID42). Both methods rely on matrix factorization-style losses and are thus not scalable to larger graphs. In this section, we will present the Deep Graph Infomax method in a top-down fashion: starting with an abstract overview of our specific unsupervised learning setup, followed by an exposition of the objective function optimized by our method, and concluding by enumerating all the steps of our procedure in a single-graph setting. We assume a generic graph-based unsupervised machine learning setup: we are provided with a set of node features, X = {x_1, x_2, . . ., x_N}, where N is the number of nodes in the graph and x_i ∈ R^F represents the features of node i. We are also provided with relational information between these nodes in the form of an adjacency matrix, A ∈ R^{N×N}. While A may consist of arbitrary real numbers (or even arbitrary edge features), in all our experiments we will assume the graphs to be unweighted, i.e. A_ij = 1 if there exists an edge i → j in the graph and A_ij = 0 otherwise. Our objective is to learn an encoder, E : R^{N×F} × R^{N×N} → R^{N×F'}, such that E(X, A) = H = {h_1, h_2, . . ., h_N} represents high-level representations h_i ∈ R^{F'} for each node i. These representations may then be retrieved and used for downstream tasks, such as node classification. Here we will focus on graph convolutional encoders, a flexible class of node embedding architectures which generate node representations by repeated aggregation over local node neighborhoods BID10.
A key consequence is that the produced node embeddings, h_i, summarize a patch of the graph centered around node i rather than just the node itself. In what follows, we will often refer to h_i as patch representations to emphasize this point. Our approach to learning the encoder relies on maximizing local mutual information; that is, we seek to obtain node (i.e., local) representations that capture the global information content of the entire graph, represented by a summary vector, s. In order to obtain the graph-level summary vectors, s, we leverage a readout function, R : R^{N×F'} → R^{F'}, and use it to summarize the obtained patch representations into a graph-level representation; i.e., s = R(E(X, A)). As a proxy for maximizing the local mutual information, we employ a discriminator, D : R^{F'} × R^{F'} → R, such that D(h_i, s) represents the probability score assigned to this patch-summary pair (should be higher for patches contained within the summary). Negative samples for D are provided by pairing the summary s from (X, A) with patch representations h̃_j of an alternative graph, (X̃, Ã). In a multi-graph setting, such graphs may be obtained as other elements of a training set. However, for a single graph, an explicit (stochastic) corruption function, C : R^{N×F} × R^{N×N} → R^{M×F} × R^{M×M}, is required to obtain a negative example from the original graph, i.e. (X̃, Ã) = C(X, A). The choice of the negative sampling procedure will govern the specific kinds of structural information that it is desirable to capture as a byproduct of this maximization. For the objective, we follow the intuitions from Deep InfoMax and use a noise-contrastive type objective with a standard binary cross-entropy (BCE) loss between the samples from the joint (positive examples) and the product of marginals (negative examples). Following their work, we use the following objective:

L = \frac{1}{N+M} \Big( \sum_{i=1}^{N} \mathbb{E}_{(X,A)} \big[ \log D(h_i, s) \big] + \sum_{j=1}^{M} \mathbb{E}_{(\tilde{X},\tilde{A})} \big[ \log \big(1 - D(\tilde{h}_j, s)\big) \big] \Big).

This approach effectively maximizes mutual information between h_i and s, based on the Jensen-Shannon divergence between the joint and the product of marginals. As all of the derived patch representations are driven to preserve mutual information with the global graph summary, this allows for discovering and preserving similarities on the patch level; for example, distant nodes with similar structural roles (which are known to be a strong predictor for many node classification tasks; BID8). Note that this is a "reversed" version of the argument given for DIM: for node classification, our aim is for the patches to establish links to similar patches across the graph, rather than enforcing the summary to contain all of these similarities (however, both of these effects should in principle occur simultaneously). We now provide some intuition that connects the classification error of our discriminator to mutual information maximization on graph representations.

Lemma 1. Let X = {X^(1), . . ., X^(|X|)} be a set of node representations drawn from an empirical probability distribution of graphs, p(X), with a finite number of elements, |X|, such that p(X^(k)) is uniform over the set. Let R be a deterministic readout function on graphs and s^(k) = R(X^(k)) be the summary vector of the k-th graph, with marginal distribution p(s). The optimal classifier between the joint distribution p(X, s) and the product of marginals p(X)p(s), assuming class balance, has an error rate upper bounded by Err* = (1/2) Σ_{k=1}^{|X|} p(s^(k))².

Proof. Denote by Q^(k) the set of all graphs in the input set that are mapped to s^(k) by R, i.e.
We now provide some intuition that connects the classification error of our discriminator to mutual information maximization on graph representations.

Lemma 1. Let {X^(k)} be a set of node features drawn from an empirical probability distribution of graphs, p(X), with a finite number of elements, |X|, such that p(X^(k)) = p(X^(k')) for all k, k'. Let R be a deterministic readout function on graphs and s^(k) = R(X^(k)) be the summary vector of the k-th graph, with marginal distribution p(s). The optimal classifier between the joint distribution p(X, s) and the product of marginals p(X)p(s), assuming class balance, has an error rate upper bounded by Err* = (1/2) Σ_{k=1}^{|X|} p(s^(k))². Here, let ⌈s^(k)⌉ denote the set of all graphs in the input set that are mapped to s^(k) by R, i.e. ⌈s^(k)⌉ = {X^(j) | R(X^(j)) = s^(k)}.

Proof. As R(·) is deterministic, samples from the joint, (X^(k), s^(k)), are drawn from the product of marginals with probability p(s^(k)) p(X^(k)), which decomposes into p(X^(k)) Σ_{X' ∈ ⌈s^(k)⌉} p(X'). The ratio of this to the corresponding probability under the joint, p(X^(k)), is maximized at 1 when ⌈s^(k)⌉ = {X^(k)}. The probability of drawing any sample of the joint from the product of marginals is then bounded above by Σ_{k=1}^{|X|} p(s^(k)) p(X^(k)). As the probability of drawing (X^(k), s^(k)) under the joint is at least as large as under the product of marginals, we know that classifying these samples as coming from the joint has a lower error than classifying them as coming from the product of marginals. The error rate of such a classifier is then the probability of drawing a sample from the joint as a sample from the product of marginals under the mixture probability, which we can bound by Err ≤ (1/2) Σ_{k=1}^{|X|} p(s^(k))² = Err*, with the upper bound achieved, as above, when R(·) is injective for all elements of {X^(k)}.

It may be useful to note that 1/(2|X|) ≤ Err* ≤ 1/2. The first inequality is obtained via a trivial application of Jensen's inequality, while the other extreme is reached only in the edge case of a constant readout function (when every example from the joint is also an example from the product of marginals, so no classifier performs better than chance).

Corollary 1. From now on, assume that the readout function used, R, is injective. Assume that the number of allowable states in the space of s, |s|, is greater than or equal to |X|. Then, for s*, the optimal summary under the classification error of an optimal classifier between the joint and the product of marginals, it holds that |s*| = |X|.

Proof. By injectivity of R, we know that s* = argmin_s Err*. As the upper error bound, Err*, is a simple geometric sum, we know that it is minimized when p(s^(k)) is uniform. As R(·) is deterministic, this implies that each potential summary state would need to be used at least once. Combined with the condition |s| ≥ |X|, we conclude that the optimum has |s*| = |X|.

Theorem 1. s* = argmax_s MI(X; s), where MI is mutual information.

Proof. This follows from the fact that mutual information is invariant under invertible transforms. As |s*| = |X| and R is injective, it has an inverse function, R^{-1}. It follows that, for any s, MI(X; s) ≤ H(X) = MI(X; X) = MI(X; R(X)) = MI(X; s*), where H is entropy.

Theorem 1 shows that for finite input sets and suitable deterministic functions, minimizing the classification error in the discriminator can be used to maximize the mutual information between the input and output. However, as was shown in, this objective alone is not enough to learn useful representations. As in their work, we discriminate between the global summary vector and local high-level representations.

Lemma 2. Let X_i^(k) = {x_j^(k)}_{j ∈ n(i)} be the neighborhood of node i in the k-th graph that collectively maps to its high-level features, h_i^(k) = E(X^(k))_i, where n is the neighborhood function that returns the set of neighborhood indices of node i for graph X^(k), and E is a deterministic encoder function. Let us assume that |X_i| = |s|. Then, minimizing the classification error between the joint p(h_i, s) and the product of marginals p(h_i)p(s) maximizes MI(X_i^(k); h_i^(k)).

Proof. Given our assumption of |X_i| = |s|, there exists an inverse X_i = R^{-1}(s), and therefore a deterministic function (the composition of the encoder with R^{-1}) mapping s to h_i. The optimal classifier between the joint p(h_i, s) and the product of marginals p(h_i)p(s) then has (by Lemma 1) an error rate upper bounded by Err* = (1/2) Σ_{k=1}^{|X|} p(s^(k))².
Therefore (as in Corollary 1), for the optimal summary the number of used states equals |X_i|, which by the same arguments as in Theorem 1 maximizes the mutual information between the neighborhood and the high-level features, MI(X_i^(k); h_i^(k)).

This motivates our use of a classifier between samples from the joint and the product of marginals; using the binary cross-entropy (BCE) loss to optimize this classifier is well-understood in the context of neural network optimization.

Assuming the single-graph setup (i.e., (X, A) provided as input), we will now summarize the steps of the Deep Graph Infomax procedure:

1. Sample a negative example by using the corruption function: (X̃, Ã) ∼ C(X, A).
2. Obtain patch representations, h_i, for the input graph by passing it through the encoder: H = E(X, A).
3. Obtain patch representations, h̃_j, for the negative example by passing it through the encoder: H̃ = E(X̃, Ã).
4. Summarize the input graph by passing its patch representations through the readout function: s = R(H).
5. Update the parameters of E, R and D by applying gradient ascent to maximize Equation 1.

This algorithm is fully summarized by Figure 1.

Figure 1: A high-level overview of Deep Graph Infomax. Refer to Section 3.4 for more details.
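The five steps above compress into a single training iteration; below is a minimal sketch, assuming PyTorch modules named encoder, readout and discriminator (all names are illustrative) and reusing the dgi_loss sketch given earlier:

```python
import torch

def dgi_step(encoder, readout, discriminator, optimizer, X, A):
    # Step 1: corruption -- keep A, shuffle node features row-wise.
    X_neg = X[torch.randperm(X.size(0))]
    # Steps 2-3: patch representations for the real and corrupted graphs.
    H = encoder(X, A)          # shape (N, F')
    H_neg = encoder(X_neg, A)  # shape (N, F')
    # Step 4: graph-level summary from the real patches only.
    s = readout(H)             # shape (F',)
    # Step 5: score patch-summary pairs and ascend Equation 1
    # (implemented as descent on the equivalent BCE loss).
    loss = dgi_loss(discriminator(H, s), discriminator(H_neg, s))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```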
We have assessed the benefits of the representation learnt by the DGI encoder on a variety of node classification tasks (transductive as well as inductive), obtaining competitive results. In each case, DGI was used to learn patch representations in a fully unsupervised manner, followed by evaluating the node-level classification utility of these representations. This was performed by directly using these representations to train and test a simple linear (logistic regression) classifier.

We follow the experimental setup described in BID23 and BID15 on the following benchmark tasks: classifying research papers into topics on the Cora, Citeseer and Pubmed citation networks BID35; predicting the community structure of a social network modeled with Reddit posts; and classifying protein roles within protein-protein interaction (PPI) networks BID49, requiring generalisation to unseen networks. Further information on the datasets may be found in TAB0 and Appendix A.

For each of the three experimental settings (transductive learning, inductive learning on large graphs, and multiple graphs), we employed distinct encoders and corruption functions appropriate to that setting (described below).

Transductive learning. For the transductive learning tasks (Cora, Citeseer and Pubmed), our encoder is a one-layer Graph Convolutional Network (GCN) model BID23, with the following propagation rule:

E(X, A) = σ(D̂^{-1/2} Â D̂^{-1/2} X Θ),   (3)

where Â = A + I_N is the adjacency matrix with inserted self-loops and D̂ is its corresponding degree matrix, i.e. D̂_ii = Σ_j Â_ij. For the nonlinearity, σ, we have applied the parametric ReLU (PReLU) function BID17, and Θ ∈ R^{F×F'} is a learnable linear transformation applied to every node, with F' = 512 features being computed (specifically, F' = 256 on Pubmed due to memory limitations).

The corruption function used in this setting is designed to encourage the representations to properly encode structural similarities of different nodes in the graph; for this purpose, C preserves the original adjacency matrix (Ã = A), whereas the corrupted features, X̃, are obtained by row-wise shuffling of X. That is, the corrupted graph consists of exactly the same nodes as the original graph, but they are located in different places in the graph, and will therefore receive different patch representations. We demonstrate that DGI is stable to other choices of corruption functions in Appendix C, but find that corruption functions which preserve the graph structure result in the strongest features.

Inductive learning on large graphs. For inductive learning, we may no longer use the GCN update rule in our encoder (as the learned filters rely on a fixed and known adjacency matrix); instead, we apply the mean-pooling propagation rule, as used by GraphSAGE-GCN BID15:

MP(X, A) = σ(D̂^{-1} Â X Θ),   (4)

with parameters defined as in Equation 3. Note that multiplying by D̂^{-1} actually performs a normalized sum (hence the mean-pooling). While Equation 4 explicitly specifies the adjacency and degree matrices, they are not needed: identical inductive behaviour may be obtained by a constant attention mechanism across the node's neighbors, as used by the Const-GAT model BID39.

For Reddit, our encoder is a three-layer mean-pooling model with skip connections BID18, where the skip connection is a featurewise concatenation ‖ (i.e. the central node and its neighborhood are handled separately). We compute F' = 512 features in each MP layer, with the PReLU activation for σ.

Given the large scale of the dataset, it will not fit into GPU memory entirely. Therefore, we use the subsampling approach of BID15, where a minibatch of nodes is first selected, and then a subgraph centered around each of them is obtained by sampling node neighborhoods with replacement. Specifically, we sample 10, 10 and 25 neighbors at the first, second and third level, respectively; thus, each subsampled patch has 1 + 10 + 100 + 2500 = 2611 nodes. Only the computations necessary for deriving the central node i's patch representation, h_i, are performed. These representations are then used to derive the summary vector, s, for the minibatch (FIG0). We used minibatches of 256 nodes throughout training.

To define our corruption function in this setting, we use a similar approach as in the transductive tasks, but treat each subsampled patch as a separate graph to be corrupted (i.e., we row-wise shuffle the feature matrices within a subsampled patch). Note that this may very likely cause the central node's features to be swapped out for a sampled neighbor's features, further encouraging diversity in the negative samples. The patch representation obtained in the central node is then submitted to the discriminator.

Inductive learning on multiple graphs. For the PPI dataset, inspired by previous successful supervised architectures BID39, our encoder is a three-layer mean-pooling model with dense skip connections BID18 BID20:

H_1 = σ(MP_1(X, A)),
H_2 = σ(MP_2(H_1 + X W_skip, A)),
E(X, A) = σ(MP_3(H_2 + H_1 + X W_skip, A)),

where W_skip is a learnable projection matrix, and MP is as defined in Equation 4. We compute F' = 512 features in each MP layer, using the PReLU activation for σ.

In this multiple-graph setting, we opted to use randomly sampled training graphs as negative examples (i.e., our corruption function simply samples a different graph from the training set). We found this method to be the most stable, considering that over 40% of the nodes have all-zero features in this dataset. To further expand the pool of negative examples, we also apply dropout BID36 to the input features of the sampled graph. We found it beneficial to standardize the learnt embeddings across the training set prior to providing them to the logistic regression model.
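For illustration, the mean-pooling rule of Equation 4 amounts to one chain of matrix products; a minimal dense sketch follows (the nonlinearity, PReLU in the text above, is left to the caller, and a practical implementation would use sparse operations):

```python
import torch

def mean_pool_layer(X, A, Theta):
    # MP(X, A): normalized sum over the self-loop-augmented neighborhood,
    # i.e. D^{-1} (A + I) X Theta, as in Equation 4.
    A_hat = A + torch.eye(A.size(0), device=A.device)
    deg_inv = 1.0 / A_hat.sum(dim=1, keepdim=True)  # D^{-1} as row scaling
    return deg_inv * (A_hat @ (X @ Theta))
```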
Readout, discriminator, and additional training details. Across all three experimental settings, we employed identical readout functions and discriminator architectures. For the readout function, we use a simple averaging of all the nodes' features:

R(H) = σ( (1/N) Σ_{i=1}^{N} h_i ),

where σ is the logistic sigmoid nonlinearity. While we have found this readout to perform the best across all our experiments, we assume that its power will diminish with the increase in graph size, and in those cases more sophisticated readout architectures, such as set2vec BID40 or DiffPool BID46, are likely to be more appropriate.

The discriminator scores summary-patch representation pairs by applying a simple bilinear scoring function (similar to the scoring used by):

D(h_i, s) = σ(h_i^T W s),

where W is a learnable scoring matrix and σ is the logistic sigmoid nonlinearity, used to convert scores into probabilities of (h_i, s) being a positive example.

All models are initialized using Glorot initialization BID11 and trained to maximize the mutual information provided in Equation 1 on the available nodes (all nodes for the transductive setup, and training nodes only in the inductive setup) using the Adam SGD optimizer BID22 with an initial learning rate of 0.001 (specifically, 10^{-5} on Reddit). On the transductive datasets, we use an early stopping strategy on the observed training loss, with a patience of 20 epochs. On the inductive datasets we train for a fixed number of epochs (150 on Reddit, 20 on PPI).
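A minimal sketch of the readout and discriminator just described, with illustrative class names; as one deliberate deviation, the discriminator below returns raw bilinear scores and leaves the logistic sigmoid to the BCE loss for numerical stability:

```python
import torch
import torch.nn as nn

class MeanReadout(nn.Module):
    # R(H): sigmoid of the average patch representation.
    def forward(self, H):
        return torch.sigmoid(H.mean(dim=0))

class BilinearDiscriminator(nn.Module):
    # D(h_i, s): one bilinear score h_i^T W s per patch; the sigmoid of
    # the equation above is folded into the loss instead.
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Parameter(torch.empty(dim, dim))
        nn.init.xavier_uniform_(self.W)  # Glorot initialization, as in the text
    def forward(self, H, s):
        return H @ (self.W @ s)  # shape (N,)
```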
The results of our comparative evaluation experiments are summarized in TAB1. For the transductive tasks, we report the mean classification accuracy (with standard deviation) on the test nodes of our method after 50 runs of training (followed by logistic regression), and reuse the metrics already reported in BID23 for the performance of DeepWalk and GCN, as well as Label Propagation (LP) BID48 and Planetoid BID44, a representative supervised random walk method. Specifically, we provide results for training the logistic regression on raw input features, as well as DeepWalk with the input features concatenated.

For the inductive tasks, we report the micro-averaged F1 score on the (unseen) test nodes, averaged after 50 runs of training, and reuse the metrics already reported in BID15 for the other techniques. Specifically, as our setup is unsupervised, we compare against the unsupervised GraphSAGE approaches. We also provide supervised results for two related architectures, FastGCN BID5 and Avg. pooling.

Our results demonstrate strong performance being achieved across all five datasets. We particularly note that the DGI approach is competitive with the results reported for the GCN model with the supervised loss, even exceeding its performance on the Cora and Citeseer datasets. We assume that these benefits stem from the fact that, indirectly, the DGI approach allows for every node to have access to structural properties of the entire graph, whereas the supervised GCN is limited to only two-layer neighborhoods (by the extreme sparsity of the training signal and the corresponding threat of overfitting). It should be noted that, while we are capable of outperforming equivalent supervised encoder architectures, our performance still does not surpass the current supervised transductive state of the art (which is held by methods such as GraphSGAN BID7). We further observe that the DGI method successfully outperformed all the competing unsupervised GraphSAGE approaches on the Reddit and PPI datasets, thus verifying the potential of methods based on local mutual information maximization in the inductive node classification domain.

Our Reddit results are competitive with the supervised state of the art, whereas on PPI the gap is still large; we believe this can be attributed to the extreme sparsity of available node features (over 40% of the nodes have all-zero features), on which our encoder heavily relies. We note that a randomly initialized graph convolutional network may already extract highly useful features and represents a strong baseline: a well-known fact, considering its links to the Weisfeiler-Lehman graph isomorphism test BID43, that has already been highlighted and analyzed by BID23 and BID15. As such, we also provide, as Random-Init, the logistic regression performance on embeddings obtained from a randomly initialized encoder. Besides demonstrating that DGI is able to further improve on this strong baseline, this result particularly reveals that, on the inductive datasets, previous random walk-based negative sampling methods may have been ineffective for learning appropriate features for the classification task.

Lastly, it should be noted that deeper encoders correspond to more pronounced mixing between recovered patch representations, reducing the effective variability of our positive/negative examples' pool. We believe that this is the reason why shallower architectures performed better on some of the datasets. While we cannot say that these trends will hold in general, with the DGI loss function we generally found benefits from employing wider, rather than deeper, models.

We performed a diverse set of analyses on the embeddings learnt by the DGI algorithm in order to better understand its properties. We focus our analysis exclusively on the Cora dataset (as it has the smallest number of nodes, significantly aiding clarity).

A standard set of "evolving" t-SNE plots BID25 of the embeddings is given in FIG2. As expected given the quantitative results, the learnt embeddings' 2D projections exhibit discernible clustering (especially compared to the raw features and Random-Init), which respects the seven topic classes of Cora. The projection obtains a Silhouette score of 0.234, which compares favorably with the previously reported score of 0.158 for Embedding Propagation BID9.

We ran further analyses, revealing insights into DGI's mechanism of learning: the model isolates biased embedding dimensions for pushing the negative example scores down, and uses the remainder to encode useful information about positive examples. We leverage these insights to retain performance competitive with the supervised GCN even after half the dimensions are removed from the patch representations provided by the encoder. These, and several other, qualitative and ablation studies can be found in Appendix B.

We have presented Deep Graph Infomax (DGI), a new approach for learning unsupervised representations on graph-structured data. By leveraging local mutual information maximization across the graph's patch representations, obtained by powerful graph convolutional architectures, we are able to obtain node embeddings that are mindful of the global structural properties of the graph. This enables competitive performance across a variety of both transductive and inductive classification tasks, at times even outperforming relevant supervised architectures.

Transductive learning. We utilize three standard citation network benchmark datasets, Cora, Citeseer and Pubmed BID35, and closely follow the transductive experimental setup of BID44. In all of these datasets, nodes correspond to documents and edges to (undirected) citations.
Node features correspond to elements of a bag-of-words representation of a document. Each node has a class label. We allow for only 20 nodes per class to be used for training; however, honouring the transductive setup, the unsupervised learning algorithm has access to all of the nodes' feature vectors. The predictive power of the learned representations is evaluated on 1000 test nodes.

Inductive learning on large graphs. We use a large graph dataset (231,443 nodes and 11,606,919 edges) of Reddit posts created during September 2014 (derived and preprocessed as in BID15). The objective is to predict the posts' community ("subreddit"), based on the GloVe embeddings of their content and comments BID31, as well as metrics such as score or number of comments. Posts are linked together in the graph if the same user has commented on both. Reusing the inductive setup of BID15, posts made in the first 20 days of the month are used for training, while the remaining posts are used for validation or testing and are invisible to the training algorithm.

Inductive learning on multiple graphs. We make use of a protein-protein interaction (PPI) dataset that consists of graphs corresponding to different human tissues BID49. The dataset contains 20 graphs for training, 2 for validation and 2 for testing. Critically, testing graphs remain completely unobserved during training. To construct the graphs, we used the preprocessed data provided by BID15. Each node has 50 features that are composed of positional gene sets, motif gene sets and immunological signatures. There are 121 labels for each node set from gene ontology, collected from the Molecular Signatures Database BID37, and a node can possess several labels simultaneously.

Visualizing discriminator scores. After obtaining the t-SNE visualizations, we turned our attention to the discriminator, and visualized the scores it attached to various nodes, for both the positive and a (randomly sampled) negative example (FIG3). From here we can make an interesting observation: within the "clusters" of the learnt embeddings on the positive Cora graph, only a handful of "hot" nodes are selected to receive high discriminator scores. This suggests that there may be a clear distinction between embedding dimensions used for discrimination and classification, which we more thoroughly investigate in the next paragraph. In addition, we may observe that, as expected, the model is unable to find any strong structure within a negative example. Lastly, a few negative examples achieve high discriminator scores, a phenomenon caused by the existence of low-degree nodes in Cora (making the probability of a node ending up in an identical context it had in the positive graph non-negligible).

Figure 6: Classification performance (in terms of test accuracy of logistic regression; left) and discriminator performance (in terms of number of poorly discriminated positive/negative examples; right) on the learnt DGI embeddings, after removing a certain number of dimensions from the embedding, either starting with the most distinguishing (p ↑) or the least distinguishing (p ↓) dimensions.

Impact and role of embedding dimensions. Guided by the previous results, we have visualized the embeddings for the top-scoring positive and negative examples (FIG4). The analysis revealed the existence of distinct dimensions in which both the positive and negative examples are strongly biased.
We hypothesize that, given the random shuffling, the average expected activation of a negative example is zero, and therefore strong biases are required to "push" the example down in the discriminator. The positive examples may then use the remaining dimensions to both counteract this bias and encode patch similarity. To substantiate this claim, we order the 512 dimensions based on how distinguishable the positive and negative examples are in them (using p-values obtained from a t-test as a proxy). We then remove these dimensions from the embedding, respecting this order, either starting from the most distinguishable (p ↑) or the least distinguishable dimensions (p ↓), and monitor how this affects both classification and discriminator performance (Figure 6). The observed trends largely support our hypothesis: if we start by removing the biased dimensions first (p ↓), the classification performance holds up for much longer (allowing us to remove over half of the embedding dimensions while remaining competitive to the supervised GCN), and the positive examples mostly remain correctly discriminated until well over half the dimensions are removed.

Here, we consider alternatives to our corruption function, C, used to produce negative graphs. We generally find that, for the node classification task, DGI is stable and robust to different strategies. However, for learning graph features towards other kinds of tasks, the design of appropriate corruption strategies remains an area of open research.

Our corruption function described in Section 4.2 preserves the original adjacency matrix (Ã = A) but corrupts the features, X̃, via row-wise shuffling of X. In this case, the negative graph is constrained to be isomorphic to the positive graph, which should not have to be mandatory. We can instead produce a negative graph by directly corrupting the adjacency matrix.

Therefore, we first consider an alternative corruption function C which preserves the features (X̃ = X) but instead adds or removes edges from the adjacency matrix (Ã ≠ A). This is done by sampling, i.i.d., a switch parameter Σ_ij, which determines whether to corrupt the adjacency matrix at position (i, j). Assuming a given corruption rate, ρ, we may define C as performing the following operations:

Σ_ij ∼ Bernoulli(ρ),
Ã = A ⊕ Σ,

where ⊕ is the XOR (exclusive OR) operation. This alternative strategy produces a negative graph with the same features, but different connectivity.
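A minimal sketch of this adjacency-corruption function; the helper name is illustrative, and a full implementation for undirected graphs would additionally sample only one triangle of the switch matrix and mirror it to keep the corrupted matrix symmetric:

```python
import torch

def corrupt_adjacency(A, rho):
    # Flip each entry of A independently with probability rho by XOR-ing
    # it with a Bernoulli switch matrix Sigma, as described above.
    Sigma = torch.bernoulli(torch.full_like(A, rho))
    return torch.logical_xor(A.bool(), Sigma.bool()).float()
```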
Here, a corruption rate of ρ = 0 corresponds to an unchanged adjacency matrix (i.e. the positive and negative graphs are identical in this case). In this regime, learning is impossible for the discriminator, and the performance of DGI is in line with a randomly initialized DGI model. At higher rates of noise, however, DGI produces competitive embeddings.

We also consider simultaneous feature shuffling (X̃ ≠ X) and adjacency matrix perturbation (Ã ≠ A), both as described before. We find that DGI still learns useful features under this compound corruption strategy; this is as expected, given that feature shuffling is already equivalent to an (isomorphic) adjacency matrix perturbation. From both studies, we may observe that a certain lower bound on the positive graph perturbation rate is required to obtain competitive node embeddings for the classification task on Cora. Furthermore, the features learned for downstream node classification tasks are most powerful when the negative graph has similar levels of connectivity to the positive graph.

The classification performance peaks when the graph is perturbed to a reasonably high level, but remains sparse; i.e. the mixing between the separate 1-step patches is not substantial, and therefore the pool of negative examples is still diverse enough. Classification performance is impacted only marginally at higher rates of corruption, corresponding to dense negative graphs and thus a less rich negative example pool, but still considerably outperforms the unsupervised baselines we have considered. This could be seen as further motivation for relying solely on feature shuffling, without adjacency perturbations, given that feature shuffling is a trivial way to guarantee a diverse set of negative examples without incurring significant computational costs per epoch.

Figure 8: DGI is stable and robust under a corruption function that modifies both the feature matrix (X̃ ≠ X) and the adjacency matrix (Ã ≠ A) on the Cora dataset. Corruption functions that preserve sparsity (ρ ≈ 1/N) perform the best. However, DGI still performs well even with large disruptions (where edges are added or removed with probabilities approaching 1). N.B. log scale used for ρ.
A new method for unsupervised representation learning on graphs, relying on maximizing mutual information between local and global representations in a graph. State-of-the-art results, competitive with supervised learning.
963
scitldr
In many environments only a tiny subset of all states yield high reward. In these cases, few of the interactions with the environment provide a relevant learning signal. Hence, we may want to preferentially train on those high-reward states and the probable trajectories leading to them. To this end, we advocate for the use of a backtracking model that predicts the preceding states that terminate at a given high-reward state. We can train a model which, starting from a high value state (or one that is estimated to have high value), predicts and samples which (state, action)-tuples may have led to that high value state. These traces of (state, action) pairs, which we refer to as Recall Traces, sampled from this backtracking model starting from a high value state, are informative as they terminate in good states, and hence we can use these traces to improve a policy. We provide a variational interpretation for this idea and a practical algorithm in which the backtracking model samples from an approximate posterior distribution over trajectories which lead to large rewards. Our method improves the sample efficiency of both on- and off-policy RL algorithms across several environments and tasks.

Training control algorithms efficiently from interactions with the environment is a central issue in reinforcement learning (RL). Model-free RL methods, combined with deep neural networks, have achieved impressive results across a wide range of domains BID18 BID39. However, existing model-free solutions lack sample efficiency, meaning that they require extensive interaction with the environment to achieve these levels of performance.

Model-based methods in RL can mitigate this issue. These approaches learn an unsupervised model of the underlying dynamics of the environment, which does not necessarily require rewards, as the model observes and predicts state-to-state transitions. With a well-trained model, the algorithm can then simulate the environment and look ahead to future events to establish better value estimates, without requiring expensive interactions with the environment. Model-based methods can thus be more sample efficient than their model-free counterparts, but often do not achieve the same asymptotic performance BID6 BID28.

In this work, we propose a method that takes advantage of unsupervised observations of state-to-state transitions for increasing the sample efficiency of current model-free RL algorithms, as measured by the number of interactions with the environment required to learn a successful policy. Our idea stems from a simple observation: given a world model, finding a path between a starting state and a goal state can be done either forward from the start or backward from the goal. Here, we explore an idea for leveraging the latter approach and combining it with model-free algorithms. This idea is particularly useful when rewards are sparse: high-value states are rare, and trajectories leading to them are particularly useful for a learner.

The availability of an exact backward dynamics model of the environment is a strong and often unrealistic requirement for most domains. Therefore, we propose learning a backward dynamics model, which we refer to as a backtracking model, from the experiences performed by the agent. This backtracking model p(s_t, a_t | s_{t+1}) is trained to predict, given a state s_{t+1}, which state s_t the agent visited before s_{t+1} and what action a_t ∼ π was performed in s_t to reach s_{t+1}.
Specifically, this is a model which, starting from a future high-value state, can be used to recall traces that have ended at this high value state, that is, sequences of (state, action)-tuples. This allows the agent to simulate and be exposed to alternative possible paths to reach a high value state. A final state may be a previously experienced high-value state, or a goal state may be explicitly given, or even produced by the agent using a generative model of high-value states BID19. Our hypothesis is that using a backtracking model in this way should benefit learning, especially in the context of weak or sparse rewards. Indeed, in environments or tasks where the agent receives rewards infrequently, it must leverage this information effectively and efficiently. Exploration methods have been employed successfully BID2 BID19 BID32 to increase the frequency at which novel states are discovered. Our proposal can be viewed as a special kind of simulated exploration proceeding backward from presumed high-value states, in order to discover trajectories that may lead to high rewards. A backtracking model aims to augment the experience of the trajectory τ leading to a high-value state by generating other possible traces τ̃ that could have also caused it.

To summarize: the main contribution of this paper is an RL method based on the use of a backtracking model, which can easily be integrated with existing on- and off-policy techniques for reducing sample complexity. Empirically, we show with experiments on eight RL environments that the proposed approach is more sample efficient.

We consider a Markov decision process (MDP) defined by the tuple (S, A, P, r, γ), where the state space S and the action space A may be discrete or continuous. The learner is not explicitly given the environment transition probability p(s_{t+1} | s_t, a_t) for going from s_t ∈ S to s_{t+1} ∈ S given a_t ∈ A, but samples from this distribution are observed. The environment emits a bounded reward r: S × A → [r_min, r_max] on each transition, and γ ∈ [0, 1) is the discount factor. Let π denote a stochastic policy over actions given states, and let R(π) = E_π[Σ_{t=0}^{T} γ^t r(s_t)] denote the expected total return when policy π is followed. The standard objective in reinforcement learning is to maximize the discounted total return R(π). Throughout the text we will refer to experienced trajectories as τ = (s_1, a_1, ..., s_T, a_T) and to simulated experiences as traces τ̃.

We introduce the backtracking model B_φ = q_φ(s_t, a_t | s_{t+1}), which is a density estimator of the joint probability distribution over the previous (s_t, a_t)-tuple, parameterized by φ. This distribution is produced by both a learned backward policy π_b = q(a_t | s_{t+1}) and a state generator q(s_t | a_t, s_{t+1}). The backward policy predicts the previous action a_t given the resulting state s_{t+1}. The state generator estimates the probability of a previous state s_t given the tuple (a_t, s_{t+1}). With these models, we may decompose q_φ(s_t, a_t | s_{t+1}) as q(s_t | a_t, s_{t+1}) q(a_t | s_{t+1}).

However, for training stability with continuous-valued states, we model the density of the state variation Δs_t = s_t − s_{t+1} rather than the raw s_t. Therefore, our density models are given by q_φ(Δs_t, a_t | s_{t+1}) = q(Δs_t | a_t, s_{t+1}) q(a_t | s_{t+1}). Note that, for readability, we will drop the φ-subscript unless it is necessary for clarity.

Generating Recall Traces. Analogous to the use of forward models in BID31 BID4, we may generate a recall trace auto-regressively.
To do so, we begin with a state s_{t+1} and sample a_t ∼ q(a_t | s_{t+1}). The state generator can then be sampled to produce the change in state, Δs_t ∼ q(Δs_t | a_t, s_{t+1}). We can continue to unroll this process, repeating with state s_t = Δs_t + s_{t+1}, for a desired number of steps. These generated transitions are then stored as a potential trace τ̃ which terminates at some final state. The backtracking model B_φ is learned by maximum likelihood, using the policy's trajectories as observations, as described in Section 3.1.

Producing Intended High Value States. Before recursively sampling from the backtracking model, we need to obtain presumed high-value states. Generally, such states will not be known in advance. However, as the agent learns, it will visit states s_t of increasingly high value V^π(s_t). The agent's full experience is maintained in a replay buffer B, in the form of tuples (s_t, a_t, s_{t+1}, r_t). Trajectories are filtered based on their returns, so that only the top k_traj trajectories are added to the buffer. In this work, we investigate our approach with two methods for generating the initial high-value states.

The first method relies on picking the most valuable states stored in the replay buffer B. As before, a valuable state may be defined by its estimated expected return V^π(s) as computed by a critic (our off-policy method), or a state that received a high reward (our on-policy method).

The second method is based on Goal GAN, recently introduced by BID19, where goal states g are produced via a Generative Adversarial Network BID10. In our variant, we map the goal state g to a valid point in state space s using a 'decoder' D. For the point-mass, the goal and state are identical; for Ant, we use a valid random joint-angle configuration at that goal position. The backtracking model is then used to find plausible trajectories that terminate at that state. For both methods, as the learner improves, one would expect that, on average, higher value states are used to seed the recall traces.

In this section, we describe how to train the backtracking model and how it can be used to improve the efficiency of the agent's policy and aid with exploration.

We use a maximum likelihood training loss for training the backtracking model B_φ on the top k% of the agent's trajectories stored in the state buffer B. At each iteration, we perform stochastic gradient updates based on agent trajectories τ, with respect to the following objective:

L_B = Σ_{t=1}^{T} [ log q(a_t | s_{t+1}) + log q(Δs_t | a_t, s_{t+1}) ],   (2)

where s_t = Δs_t + s_{t+1} and T is the episode length. For our chosen backtracking model, this implies a mean-squared error loss (i.e. corresponding to a conditional Gaussian for Δs_t) for continuous action tasks, and a cross-entropy loss (i.e. corresponding to a conditional Multinoulli for s_t given a_t and s_{t+1}) for discrete action tasks. The buffer is constantly updated with recent experiences, and the backtracking model is trained online in order to encourage generalization as the distribution of trajectories in the buffer evolves.
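For illustration, the backward rollout described at the start of this section can be written as a short loop; the following sketch assumes the two learned conditionals are exposed as callables returning torch.distributions objects (all names are illustrative):

```python
import torch

def sample_recall_trace(backward_policy, state_generator, s_final, length):
    # Roll the backtracking model backwards from a high-value state.
    trace, s_next = [], s_final
    for _ in range(length):
        a = backward_policy(s_next).sample()         # a_t ~ q(a_t | s_{t+1})
        delta = state_generator(a, s_next).sample()  # delta_s_t ~ q(. | a_t, s_{t+1})
        s_prev = s_next + delta                      # s_t = delta_s_t + s_{t+1}
        trace.append((s_prev, a))
        s_next = s_prev
    trace.reverse()  # recall traces are replayed forward in time
    return trace
```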
We now describe how we use the recall traces τ̃ to improve the agent's policy π_θ. In brief, the traces τ̃ generated by the backtracking model are used as observations for imitation learning BID33 (Ross, 2011) BID3 by the agent. The backtracking model will be continuously updated as new actual experiences are generated, as described in Section 3.1.

Imitation learning of the policy is performed simply by maximizing the log-probability of the agent's action a_t given s_t,

L_I = Σ_t log π_θ(a_t | s_t),   (3)

where the (s_t, a_t)-tuples come from a generated trace τ̃. Our motivation for having the agent imitate trajectories from the backtracking model is two-fold:

Dealing with sparse rewards. States with significant return are emphasized by the backtracking model, since the traces it generates are initialized at high value states. We expect this behaviour to help in the context of sparse or weak rewards.

Aiding exploration. The backtracking model can also generate new ways to reach high-value states. So even if it cannot directly discover new high-value states, it can at least point to new ways to reach known high value states, thus aiding with exploration.

Algorithm 1:
1. Execute the policy to produce a trajectory τ and add it to the buffer B.
2. Estimate ∇_θ R(π_θ) from the RL algorithm and update θ.
3. Compute L_B via Equation 2, using the top k% valuable states from the top k_traj trajectories in B, and update φ.
4. Obtain a target high value state s (see Algorithm 2 for details).
5. Generate N recall traces τ̃ for s using B_φ(s).
6. Compute the imitation loss L_I via Equation 3 and update θ.
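A minimal sketch of the imitation step in Algorithm 1, maximizing Equation 3 over one recall trace; policy(s) is assumed (illustratively) to return a torch.distributions object over actions:

```python
import torch

def imitation_update(policy, optimizer, trace):
    # Descend the negated log-likelihood, i.e. ascend
    # sum_t log pi_theta(a_t | s_t) over the trace tuples.
    loss = torch.zeros(())
    for s, a in trace:
        loss = loss - policy(s).log_prob(a).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```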
Thus far, we have motivated the use of a backtracking model intuitively. In this section, we provide a motivation relying on a variational perspective of RL and ideas from the wake-sleep algorithm BID21.

Let R be the return of a policy trajectory τ, i.e. the sum of discounted rewards under this trajectory. Consider the event of the return R being larger than some threshold L. The probability of that event under the agent's policy is p(R > L) = Σ_τ p(R > L | τ) p(τ), where p(τ) is the distribution of trajectories under policy π, p(R > L | τ) = 1_{R>L}, and 1_A is the indicator function that is equal to 1 if A is true and is otherwise 0.

Let q(τ) be any other distribution over trajectories. Then we have the following classic relationship between the marginal log-probability of an observation (R > L) and the KL-divergence between q and the posterior over a latent variable (τ):

log p(R > L) = KL(q(τ) ‖ p(τ | R > L)) + L_q,

where

L_q = E_{q(τ)}[ log p(R > L | τ) + log p(τ) − log q(τ) ].

This suggests an EM-style training procedure that alternates between training the variational distribution q(τ) towards the posterior p(τ | R > L) and training the policy to maximize L_q. In this context, we view the backtracking model and the high-value states sampler as providing q(τ) implicitly. Specifically, we assume that q factorizes temporally as in Equation 2, with the backtracking model providing q(a_t | s_{t+1}) and q(Δs_t | a_t, s_{t+1}). We parameterize the approximate posterior in this way so that we can take advantage of a model of the backwards transitions to conveniently sample from q starting from a high-value final state s_T. This makes sense in the context of sparse rewards, where few states have significant reward. If we have a way to identify these high-reward states, then it is much easier to obtain these posterior trajectories by starting from them.

Training q(τ) by minimizing the KL(q(τ) ‖ p(τ | R > L)) term is hard due to the direction of the KL-divergence. Taking inspiration from the wake-sleep algorithm BID21, we instead minimize the KL in the opposite direction, KL(p(τ | R > L) ‖ q(τ)). This can be done by sampling trajectories from p(τ | R > L) (e.g. by rejection sampling, keeping only the forward-generated trajectories which lead to R > L) and maximizing their log-probability under q(τ). This recovers our algorithm, which trains the backtracking model on high-return trajectories generated by the agent.

So far we have assumed a known threshold L. However, in practice the choice of L is important. While ultimately we would want L to be close to the highest possible return, at the early stages of training it cannot be, as trajectories from the agent are unlikely to reach that threshold. A better strategy is to gradually increase L. One natural way of doing this is to use the top few percentile trajectories sampled by the agent for training q(τ), instead of explicitly setting L. This approach can be thought of as providing a curriculum for training the agent that is adapted to its performance. It is also related to evolutionary methods BID17 BID1, which keep the "fittest" samples from a population in order to re-estimate a model, from which new samples are generated.

This variational point of view also tells us how the prior over the last state should be constructed. The ideal prior q(s_T) is simply a generative model of the final states leading to R > L. Both methods proposed above estimate q(s_T) with this purpose, either non-parametrically (with the forward samples for which R > L) or parametrically (with a generative model trained from those samples). Also, if goal states are known ahead of time, then we can set L as the reward of those states (minus a small quantity) and we can seed the backwards trajectories from these goal states. In that case, the variational objective used to train the policy is a proxy for the log-likelihood of reaching a goal state.
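As a concrete illustration of this percentile-based curriculum, the threshold can simply be recomputed from a window of recent returns; a minimal sketch, in which the percentile value is an assumption and not a number taken from the paper:

```python
import numpy as np

def adaptive_threshold(recent_returns, top_percent=10.0):
    # Keep only the top `top_percent` of trajectories for training q(tau):
    # L is the corresponding percentile of recent returns, so the effective
    # threshold rises automatically as the agent improves.
    return np.percentile(recent_returns, 100.0 - top_percent)
```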
Control as inference. The idea of treating control problems as inference has been around for many years BID40 BID23 BID44 BID45 BID34. A good example of this idea is the use of Expectation Maximization (EM) for RL BID5, of which the PoWER algorithm BID24 is one well-known practical implementation. In EM algorithms for RL, learning is divided between the estimation of the expectation over the trajectories conditioned on the reward observations and the estimation of a new policy based on these expectation estimates. While we do not explicitly try to estimate these expectations, one could argue that the samples from the backtracking model serve a similar purpose. Variational inference has also been proposed for policy search BID29 BID26. Probabilistic views of the RL problem have also been used to construct maximum entropy methods for both regular and inverse RL BID15.

Using off-policy trajectories. By incorporating the trajectories of a separate backtracking model, our method is similar in spirit to approaches which combine on-policy learning algorithms with off-policy samples. Recent examples of this, like the interpolated policy gradient BID14, PGQ BID30 and ACER BID46, combine policy gradient learning with ideas for off-policy learning and methodology inspired by Q-learning. Our method differs by using the backtracking model to obtain off-policy trajectories and is, as an idea, independent of the specific model-free RL method it is combined with. Our work on effectively propagating value updates backwards is also related to the seminal work on prioritized sweeping.

Model-based methods. A wide range of model-based RL and control methods have been proposed in the literature BID8. PILCO is a model-based policy search method that learns a probabilistic model of the dynamics and incorporates model uncertainty into long-term planning. The classic Dyna algorithm (Sutton) was proposed to take advantage of a model to generate simulated experiences, which could be included in the training data for a model-free algorithm. This method was extended to work with deep neural network policies, but performed best with models that were not neural networks BID13. Other extensions to Dyna have also been proposed BID38 BID22 BID18. Other approaches have been proposed to combine the advantages of both value- and policy-based methods BID27 BID41. Finally, BID9 is concurrent work and also proposes training on imagined reversal steps from known goal states. BID11 proposed a similar learning rule in the context of generative models, modifying the parameters of a transition operator to make the reverse of a heated trajectory more likely under a reverse cooling process.

Our experimental evaluation aims to understand whether our method can improve the sample complexity of off-policy as well as on-policy RL algorithms. Practically, we must choose the length of the generated backward traces. Longer traces become increasingly likely to deviate significantly from the traces that the agent can generate from its initial state. Therefore, in our experiments, we sample fairly short traces τ̃ from the backtracking model, with the length adjusted manually based on the time-scale of each task.

We empirically show the following across different experimental settings:
• Samples from the true backtracking model can be used to improve sample efficiency.
• Using a learned backtracking model starting from high value states accelerates learning for off-policy as well as on-policy experiments.
• Modeling and generating high value states parametrically (using Goal GAN) also helps.

Here, we aim to check that the proposed approach works in the ideal case, when the true backtracking model is known. To investigate this, we use the four-room environment from BID35 in various sizes. The four-room grid world is a simple environment where the agent must navigate to a goal position, through bottleneck states (doorways), to receive a positive reward. We compare the proposed method to the scenario where the policy is trained through the actor-critic method BID25 with generalized advantage estimation (GAE) BID37.

Finding the goal state becomes more challenging as the dimension increases, due to the sparsity of rewards. Therefore, we expect the backtracking model to be a more effective tool in larger environments. Confirming this hypothesis, we see in FIG1 that, as we increase the dimensionality of the maze, sample efficiency increases thanks to recall traces, compared to our baseline.

Here we aim to compare the performance of recall traces with Prioritized Experience Replay (PER). PER stores past experiences in a buffer and then selectively trains on high value experiences. We again use the four-room environment. PER gives an optimistic bias to the critic, and while it allows the reinforcement of sparse rewards, it also converges too quickly to an exploitation mode, which can be difficult to get out of. In order to show this, we plot the state visitation counts of policies trained with PER and with recall traces, and see that the latter visit more states. We show in FIG3 that while PER is competitive in the smaller 11x11 environment, recall traces outperform it in the larger 15x15 environment. FIG2 also shows how the use of a backtracking model and recall traces pushes the policy to visit a wider variety of grid positions than PER.

One situation in which states at which to start the backtracking model are naturally available is when the method is combined with an algorithm for sub-goal selection.
We chose to investigate how well the backtracking model can be used in conjunction with the automatic goal generation algorithm from BID19, Goal GAN. It uses a Generative Adversarial Network to produce sub-goals at the appropriate level of difficulty for the agent to reach. As the agent learns, new sub-goals of increasing difficulty are generated. This way, the agent is pressured to explore and learn to be able to reach any location in the state space. We hypothesize that the backtracking model should help the agent to reach the sub-goals faster and explore more efficiently. Hence, in this learning scenario, what changes is that high value states are now generated by Goal GAN instead of being selected by a critic from a replay buffer.

We performed experiments on the U-Maze Ant task described in BID19. It is a challenging robotic locomotion task where a quadruped robot has to navigate its center of mass to within some particular distance of a target goal. The objective is to cover as much of the space of the U-shaped maze as possible. We find that using the backtracking model improves data efficiency, reaching a coverage of more than 63% in 155 steps instead of 275 steps without it (FIG4). More visualizations and learning curves for the U-Maze Ant task, as well as the N-dimensional Point Mass task, can be found in the Appendix (FIG0 and 13).

We conducted robotic locomotion experiments using the MuJoCo simulator BID43, with the same setup as BID28. We compare our approach with a pure model-free method on standard benchmark locomotion tasks, learning the fastest forward-moving gait possible. The model-free approach we consider is the rllab implementation of trust region policy optimization (TRPO) BID36. For the TRPO baseline we use the same setup as BID28. See the appendix for the model implementation details. The results in Figure 6 show that our method consistently outperforms TRPO on all of the benchmark tasks in terms of final performance, and learns substantially faster.

Figure 6: Our model as compared to TRPO. For the TRPO baselines, except Walker, we ran with 5 different random seeds. For our model, we ran with 5 different random seeds.

Here, we evaluate on the same range of challenging continuous control tasks from the OpenAI gym benchmark suite. We compare to Soft Actor-Critic (SAC) BID16, which has been shown to be more sample efficient than other off-policy algorithms such as DDPG BID18, and which consistently outperforms DDPG. Part of the reason for choosing SAC instead of DDPG was that the latter is also known to be more sensitive to hyper-parameter settings, limiting its effectiveness on complex tasks BID20. For SAC, we use the same hyper-parameters reported in BID16. Implementation details for our model are listed in the Appendix.

Performance across tasks. The results in Figure 7 show that our method consistently improves the performance of SAC on all of the benchmark tasks, leading to faster learning. In fact, the largest improvement is observed on the hardest task, Ant.

Figure 7: Our model as compared to SAC. We ran SAC baselines with 2 different random seeds. For our model, we ran with 5 different random seeds.

We advocate for the use of a backtracking model for improving sample efficiency and exploration in RL. The method can easily be combined with popular RL algorithms like TRPO and Soft Actor-Critic. Our results indicate that the recall traces generated by such models are able to accelerate learning on a variety of tasks. We also show that the method can be combined with automatic goal generation.
The Appendix provides more analysis on the sensitivity of our method to various factors and ablations. We show that a random model is outperformed by a trained backtracking model, confirming its usefulness, and present plots showing the effect of varying the length of recall traces. For future work, while we observed empirically that the method has practical value and could relate its workings to a variational perspective, more could be done to improve our theoretical understanding of its convergence behavior and of what kinds of assumptions need to hold about the environment. It would also be interesting to investigate how the backtracking model can be combined with forward models from a more conventional model-based system.

Algorithm 2 (choosing the target high value state): return argmax V(s) over all s ∈ B.

In FIG7 we show the learning efficiency when the length of the backward traces is varied.

The backtracking model we used for all the experiments consisted of two multi-layer perceptrons: one for the backward action predictor Q(a_t | s_{t+1}) and one for the backward state predictor Q(s_t | a_t, s_{t+1}). Both MLPs had two hidden layers of 128 units. The action predictor used hyperbolic tangent units, while the inverse state predictor used ReLU units. Each network produced as output the mean and variance parameters of a Gaussian distribution. For the action predictor, the output variance was fixed to 1. For the state predictor, this value was learned for each dimension.

We perform about a hundred training steps of the backtracking model for every 5 training steps of the RL algorithm. For training the backtracking model, we maintain a buffer which stores (state, action, next state, reward) tuples yielding high rewards. We sample a batch from the buffer and then normalize the states and actions before training the backtracking model on them. During the sampling phase, we feed the normalized next state into the backward action predictor Q(a_t | s_{t+1}) to get a normalized action. We then input this normalized action into the backward state predictor Q(s_t | a_t, s_{t+1}) to get a normalized previous state. Finally, we un-normalize the obtained previous states and actions using the corresponding mean and variance to compute the imitation loss. This is required for stability during sampling.
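A minimal PyTorch sketch matching the backtracking model architecture just described (two 128-unit hidden layers per MLP, tanh units for the action predictor, ReLU units for the state predictor, fixed unit action variance, learned per-dimension state variance); class and method names are illustrative:

```python
import torch
import torch.nn as nn

class BacktrackingModel(nn.Module):
    def __init__(self, state_dim, action_dim):
        super().__init__()
        # Backward action predictor Q(a_t | s_{t+1}), tanh hidden units.
        self.action_net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.Tanh(),
            nn.Linear(128, 128), nn.Tanh(),
            nn.Linear(128, action_dim))
        # Backward state predictor Q(delta_s_t | a_t, s_{t+1}), ReLU units.
        self.state_net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, state_dim))
        self.state_log_std = nn.Parameter(torch.zeros(state_dim))

    def backward_action(self, s_next):
        mu = self.action_net(s_next)
        return torch.distributions.Normal(mu, torch.ones_like(mu))  # fixed variance

    def backward_state(self, a, s_next):
        mu = self.state_net(torch.cat([a, s_next], dim=-1))
        return torch.distributions.Normal(mu, self.state_log_std.exp())
```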
FOUR ROOM ENVIRONMENT

An agent has to navigate to within ε distance of the goal position at the end of a U-shaped maze. For the Point Mass U-Maze navigation, we use high return trajectories for training the parameters of the backtracking model (i.e. those trajectories which reach the sub-goals defined by Goal GAN). We sample a trajectory of length 20 from our backtracking model, and the recall traces are used to improve the policy via imitation learning. For the Ant U-Maze navigation, we train identically, only with length-50 traces.

It is clear from FIG0 that it is helpful to learn the backward action predictor Q(a_t | s_{t+1}). FIG0 shows the same comparison for Ant-v1 and Walker2d-v1, where the baseline is Soft Actor-Critic (SAC). For our baseline, i.e. the scenario where the backtracking model is not learned, we experimented with all the trace lengths 1, 2, 3, 4, 5 and 10, and chose the best one as our baseline. As can be seen in FIG0, the backtracking model performs best, compared both to the SAC baseline and to the scenario where the backtracking model is not trained, showing that the backtracking model is not just doing a random search.

The Dyna algorithm uses a forward model to generate simulated experience that can be included in a model-free algorithm. This method can be made to work with deep neural network policies, but performed best with models which are not neural networks BID12. Our intuition is that it might be better to generate simulated experience from a backtracking model (starting from a high value state) than from a forward model, simply because we know that the traces from the backtracking model are good, as they lead to a high value state, which is not really the case for the simulated experience from a forward model.

In FIG0 we evaluate the forward model with on-policy TRPO on the Ant and Humanoid MuJoCo tasks. We were not able to get any better results with the forward model as compared to the baseline TRPO, which is consistent with the findings of BID12. Building the backward model is necessarily neither harder nor easier. Realistically, building any kind of model and having it be accurate for more than, say, 10 time steps is pretty hard. But if we only have 10 time steps of accurate transitions, it is probably better to take them backward from different states than forward from the same initial state (as corroborated by our experiments).

Something which remains a part of future investigation is to train the forward model and the backtracking model jointly. As the backtracking model is tied to high value states, the forward model could extract the goal value from the high value state. When trained jointly, this should help the forward model learn a reduced representation of the state that is necessary to evaluate the reward. Ultimately, when planning, we want the model to predict the goal accurately, which helps to optimize for this "goal-oriented" behaviour directly. This also avoids the need to model irrelevant aspects of the environment.

Here, we show the results of various ablations in the four-room environment, which highlight the effect various hyperparameters have on performance. In FIG0 we train on the recall traces after a fixed number of iterations of learning in the true environment. For all of the environments, as we increase the ratio of updates in the true environment to updates using recall traces from the backward model, the performance decreases significantly. This again highlights the advantages of learning from recall traces.

In FIG0, we see the effects of training from the recall traces multiple times for every iteration of training in the true environment. We can see that as we increase the number of iterations of learning from recall traces, we correspondingly need to choose a smaller trace length. For each update in the real environment, making a larger number of updates helps if the trace length is smaller; if the trace length is larger, it has a detrimental effect on the learning process, as seen in Figure 18. Also, as observed in the 12x12 and 14x14 environments, it may happen that for an increased ratio of learning from recall traces and a high trace length, the model achieves the maximum reward initially but after some time the average reward plummets.

Legend indicates the ratio of updates in the true environment to updates using recall traces. Here we learn more in the true environment than using traces. We can see that not learning regularly from recall traces gives decreased performance.

FIG0: Legend indicates the ratio of updates in the true environment to updates using recall traces, with the trajectory length of the traces in parentheses. Here we learn more using recall traces than in the actual environment.
This indicates that more updates using recall traces help if we use a correspondingly lower trajectory length. We investigate the effect of doing more updates from the generated recall traces on some Mujoco tasks using off-policy SAC. As can be seen in Figure 19, we find that using more traces helps and that for an increased number of updates we need to correspondingly shorten the trajectory length of the sampled traces. These experiments show that there is a balance between how much we should train in the actual environment and how much we should learn from the traces generated by the backward model. In the smaller four-room environment, a 1:1 balance performed best. In Mujoco tasks and larger four-room environments, doing more updates from the backward model helps, but in the smaller four-room maze, doing more updates is detrimental. So, depending upon the complexity of the task, we need to decide this ratio.
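To make the backtracking model described above concrete, here is a minimal PyTorch-style sketch of the two Gaussian MLPs and the backward sampling of a recall trace. This is a sketch under the stated assumptions (two hidden layers of 128 units, tanh/ReLU activations, fixed unit variance for the action predictor); the class and function names are ours, not the original implementation's.

```python
import torch
import torch.nn as nn

class BackwardActionPredictor(nn.Module):
    """Q(a_t | s_{t+1}): Gaussian with learned mean and fixed unit variance."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.Tanh(),
            nn.Linear(128, 128), nn.Tanh(),
            nn.Linear(128, action_dim))

    def forward(self, next_state):
        mean = self.net(next_state)
        return torch.distributions.Normal(mean, torch.ones_like(mean))

class BackwardStatePredictor(nn.Module):
    """Q(s_t | a_t, s_{t+1}): Gaussian with learned per-dimension variance."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU())
        self.mean = nn.Linear(128, state_dim)
        self.log_var = nn.Linear(128, state_dim)

    def forward(self, action, next_state):
        h = self.net(torch.cat([action, next_state], dim=-1))
        std = (0.5 * self.log_var(h)).exp()
        return torch.distributions.Normal(self.mean(h), std)

def sample_recall_trace(action_model, state_model, high_value_state, length):
    """Walk backwards from a (normalized) high-value state for `length` steps.

    In the full method, states/actions would be un-normalized with the
    buffer statistics before computing the imitation loss.
    """
    trace, s_next = [], high_value_state
    for _ in range(length):
        a = action_model(s_next).sample()          # normalized action
        s_prev = state_model(a, s_next).sample()   # normalized previous state
        trace.append((s_prev, a))
        s_next = s_prev
    return list(reversed(trace))
```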
A backward model of previous (state, action) given the next state, i.e. P(s_t, a_t | s_{t+1}), can be used to simulate additional trajectories terminating at states of interest! Improves RL learning efficiency.
964
scitldr
Prepositions are among the most frequent words. Good prepositional representation is of great syntactic and semantic interest in computational linguistics. Existing methods for preposition representation either treat prepositions as content words (e.g., word2vec and GloVe) or depend heavily on external linguistic resources, including syntactic parsing, to train task- and dataset-specific representations. In this paper we use word-triple counts (where one of the words is a preposition) to capture a preposition's interaction with its head and children. Prepositional embeddings are derived via tensor decompositions on a large unlabeled corpus. We reveal a new geometry involving Hadamard products and empirically demonstrate its utility in paraphrasing phrasal verbs. Furthermore, our prepositional embeddings are used as simple features for two challenging downstream tasks: preposition selection and prepositional attachment disambiguation. We achieve results comparable to or better than the state of the art on multiple standardized datasets. Prepositions are a linguistically closed class comprising some of the most frequent words; they play an important role in the English language since they encode rich syntactic and semantic information. Many preposition-related tasks still remain unsolved in computational linguistics because of their polysemous nature and flexible usage patterns. An accurate understanding and representation of prepositions' linguistic role is key to several important NLP tasks such as grammatical error correction and prepositional phrase attachment. A first-order approach is to represent prepositions as real-valued vectors via word embeddings such as word2vec BID21 and GloVe BID25. Word embeddings have brought a renaissance to NLP research; they have been very successful in capturing word similarities as well as analogies (both syntactic and semantic) and are now mainstream in nearly all downstream NLP tasks (such as question-answering). Despite this success, no specific properties of word embeddings of prepositions have been highlighted in the literature. Indeed, many of the common prepositions have very similar vector representations, as shown in TAB0 for preposition vectors trained using word2vec and GloVe. While this suggests that using available representations for prepositions diminishes the distinguishing features between prepositions, one could hypothesize that this is primarily because standard word embedding algorithms treat prepositions no differently from other content words such as verbs and nouns, i.e., embeddings are created based on co-occurrences with other words. However, prepositions are very frequent and co-occur with nearly all words, which means that their co-occurrence ought to be treated differently. Modern descriptive linguistic theory proposes to understand a preposition via its interactions with both the head (attachment) and child (complement) BID12; BID8. This theory naturally suggests that one should count co-occurrences of a given preposition with pairs of neighboring words. One way of achieving this would be by considering a tensor of triples (word1, word2, preposition), where we do not restrict word1 and word2 to be head- and child-words; instead we model a preposition's interaction with all pairs of neighboring words via a slice of a tensor X — the slice is populated by word co-occurrences restricted to a context window of the specific preposition.
Thus, the tensor dimension is V × V × K, where V is the vocabulary size and K is the number of prepositions; since K ≈ 50, we note that V ≫ K. Using such a representation, we find that the resulting tensor is low rank, and we extract embeddings for both preposition and non-preposition words using a combination of standard ideas from word representations (such as weighted spectral decomposition, as in GloVe BID25) and tensor decompositions (alternating least squares (ALS) methods BID29). The preposition embeddings are discriminative; see the preposition similarity of the tensor embedding in TAB0. We demonstrate that the resulting representation for prepositions captures the core linguistic properties of prepositions. We do this using both intrinsic evaluations and downstream tasks, where we provide new state-of-the-art results on well-known NLP tasks involving prepositions. Intrinsic evaluations: We show that the Hadamard product of the embeddings of a verb and a preposition closely approximates the representation of this phrasal verb's paraphrase. Example: v_made ⊙ v_from ≈ v_produced, where ⊙ represents the Hadamard product (i.e., pointwise multiplication) of two vectors; this approximation does not hold for the standard word embeddings of prepositions (word2vec or GloVe). We provide a mathematical interpretation for this new geometry as well as empirically demonstrate the generalization on a new dataset of compositional phrasal verbs. Extrinsic evaluations: Our preposition embeddings are used as features for a simple classifier in two well-known, challenging downstream NLP classification tasks. In both tasks, we perform comparably to or strictly better than the state of the art on multiple standardized datasets. Preposition selection: The choice of prepositions significantly influences (and is governed by) the semantics of the context they occur in. Furthermore, the prepositional choice is usually very subtle (and consequently is one of the most frequent error types made by second-language English speakers BID19). This task tests the choice of a preposition in a large set of contexts (7,000 instances across both the CoNLL-2013 and SE datasets BID26). Our approach achieves 6% and 2% absolute improvement over the previous state of the art on the respective datasets. Prepositional attachment disambiguation: Prepositional phrase attachment is a common cause of structural ambiguity in natural language. In the sentence "Pierre Vinken joined the board as a voting member", the prepositional phrase "as a voting member" can attach to either "joined" (the VP) or "the board" (the NP); in this case the VP attachment is correct. Despite extensive study over decades of research, prepositional attachment continues to be a major source of syntactic parsing errors BID4; BID7. We use our prepositional representations as simple features in a standard classifier for this task. Our approach, tested on a widely studied standard dataset BID2, achieves 89% accuracy, essentially the same performance as the state of the art (90% accuracy). It is noteworthy that while the state-of-the-art results are obtained with significant linguistic resources, including syntactic parsers and WordNet, our approach does not rely on these resources to achieve comparable performance.
We emphasize two aspects of our contributions. (1) It is folklore within the NLP community that representations via pairwise word counts capture much of the benefit of unlabeled sentence data; for example, BID29 reports that their word representations via word-triple counts are better than others, but still significantly worse than regular word2vec representations. One of our main observations is that considering word-triple counts makes the most (linguistic) sense when one of the words is a preposition. Furthermore, the sparsity of the corresponding tensor is no worse than the sparsity of the regular word co-occurrence matrix (since prepositions are so frequent and co-occur with essentially every word). Taken together, these two points strongly suggest the benefits of tensor representations in the context of prepositions. (2) The word and preposition representations via tensor decomposition are simple features fed to a standard classifier. In particular, we do not use syntactic parsing (which many prior methods have relied on), handcrafted features BID26, or task-specific representations trained on the annotated training dataset BID2. This simplicity, combined with our strong empirical results (new state-of-the-art results on long-standing datasets), lends credence to the strength of the prepositional representations found via tensor decompositions. We begin with a description of how the tensor with triples (word, word, preposition) is formed, and empirically show that its slices are low rank. Next, we derive low-dimensional vector representations for words and prepositions via appropriate tensor decomposition methods. Tensor creation: Suppose that K prepositions are in the preposition set P = {p_1, ..., p_K}; here K is 49 in our preposition selection task, and 76 in the attachment disambiguation task. The vocabulary, the set of all words except prepositions, contains N words V = {w_1, ..., w_N}, with N ≈ 1M. We generate a third-order tensor X of size N × N × (K+1) from WikiCorpus BID0 in the following way. We say two words co-occur if they appear within distance t of each other in a sentence. For k ≤ K, the entry X_ijk is the number of occurrences where word w_i co-occurs with preposition p_k, and w_j also co-occurs with preposition p_k, in the same sentence; this is counted across all sentences in a large WikiCorpus. Here we use a window of size t = 3. There are also a number of words which do not occur in the context of any preposition. To make full use of the data, we add an extra slice X[:, :, K+1]: the entry X_ij(K+1) is the number of occurrences where w_i co-occurs with w_j (within distance 2t = 6) but at least one of them is not within a distance of t of any preposition. Note that the preposition window of 3 is smaller than the word window of 6, since it is known that the interaction between prepositions and neighboring words usually weakens more sharply with distance compared to content words BID11. Empirical properties of X: We find that the tensor X is very sparse — only 1% of the tensor elements are non-zero. Furthermore, every slice log(1 + X[:, :, k]) is low rank (here the logarithm is applied componentwise to every entry of the tensor slice). We choose slices corresponding to the prepositions "about", "before", "for", "in" and "of", and plot their normalized singular values in FIG0. We see that the singular values decay dramatically, suggesting low-rank structure in each slice.
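Before moving to decomposition, here is a minimal sketch of the triple-counting step described above. The window size follows the text (t = 3 around prepositions); the variable names, whitespace tokenization, and the omission of the extra (K+1)-th slice (which would be counted analogously with window 2t) are our simplifications, not the authors' released code.

```python
from collections import defaultdict

def build_preposition_tensor(sentences, prepositions, t=3):
    """Count (word_i, word_j, preposition_k) co-occurrences as a sparse dict.

    counts[(w_i, w_j, k)] is incremented whenever words w_i and w_j both
    fall within distance t of an occurrence of preposition p_k in a sentence.
    """
    prep_index = {p: k for k, p in enumerate(prepositions)}
    counts = defaultdict(int)
    for sent in sentences:
        tokens = sent.split()
        for pos, tok in enumerate(tokens):
            k = prep_index.get(tok)
            if k is None:
                continue
            # non-preposition words within distance t of this occurrence
            window = [w for d, w in enumerate(tokens)
                      if d != pos and abs(d - pos) <= t
                      and w not in prep_index]
            for wi in window:
                for wj in window:
                    counts[(wi, wj, k)] += 1
    return counts
```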
Tensor decomposition: We combine standard ideas from word embedding algorithms and tensor decomposition algorithms to arrive at a low-rank approximation to the tensor log(1 + X). In particular, we consider two separate methods. A generic method to decompose the tensor into its modes is the CANDECOMP/PARAFAC (CP) decomposition BID15. The tensor log(1 + X) is decomposed into three modes, U ∈ R^{d×N}, W ∈ R^{d×N} and Q ∈ R^{d×(K+1)}, based on the solution to the optimization problem

min_{U,W,Q} Σ_{i,j,k} ( log(1 + X_ijk) − ⟨u_i, w_j, q_k⟩ )²,

where u_i, w_i and q_i are the i-th columns of U, W and Q, respectively, and ⟨a, b, c⟩ = 1ᵀ(a ⊙ b ⊙ c) is the inner product of the three vectors a, b and c. Here 1 is the column vector of all ones and ⊙ refers to the Hadamard product. We can interpret the columns of U as the word representations and the columns of Q as the preposition representations, each of dimension d (equal to 200 in this paper). There are several algorithmic solutions to this optimization problem in the literature, most of which are based on alternating least squares methods BID15 BID5 BID1, and we employ a recent one named Orth-ALS BID29 in this paper. Orth-ALS periodically orthogonalizes the decomposed components while fixing two modes and updating the remaining one. It is supported by theoretical guarantees and empirically outperforms the standard ALS method in different applications. The second method is a weighted decomposition with bias terms, which solves

min Σ_{i,j,k} ω_ijk ( ⟨u_i, w_j, q_k⟩ + b_Ui + b_Wj + b_Qk − log(1 + X_ijk) )²,

where b_Ui is the scalar bias for word i in matrix U. Similarly, b_Wj is the bias for word j in matrix W, and b_Qk is the bias for preposition k in matrix Q. Bias terms are learned to minimize the loss function. Here ω_ijk is the weight assigned to each tensor element X_ijk, and we use the weighting proposed by GloVe: ω_ijk = min((X_ijk / x_max)^α, 1). We set the hyperparameters to x_max = 10 and α = 0.75 in this work. We solve this optimization problem via standard gradient descent, arriving at word representations U and preposition representations Q. Representation interpretation: Suppose that we have a phrase (h, p_i, c), where h, p_i and c are the head word, preposition i (1 ≤ i ≤ K), and child, respectively. A phrase example is "split off something". The inner product of the word vectors of h, p_i and c reflects how frequently h and c co-occur in the context of p_i. It also reflects how cohesive the triple is. Recall that there is an extra (K+1)-th slice that describes word co-occurrences outside the preposition window, which covers cases such as the phrasal verb (v, c), where v and c are the verb and the child. The verb phrase "divide something" is equivalent to the phrase "split off something". For any word c that fits in this phrase semantically, we can expect that

⟨u_h, q_i, u_c⟩ ≈ ⟨u_v, q_{K+1}, u_c⟩.

In other words, u_h ⊙ q_i ≈ u_v ⊙ q_{K+1}, where a ⊙ b denotes the pointwise multiplication (Hadamard product) of vectors a and b. This suggests that we could paraphrase the verb phrase (h, p_i) by finding the verb v whose embedding best satisfies

u_v ⊙ q_{K+1} ≈ u_h ⊙ q_i.

Well-trained embeddings should be able to capture the relation between prepositional phrases and their equivalent phrasal verbs. In TAB1, we list seven paraphrases of verb phrases, as generated from the weighted tensor decomposition. A detailed list of paraphrases on a new dataset of compositional verb phrases is available in TAB0 in Appendix B, where we also compare paraphrasing using regular word embeddings and via both addition and Hadamard product operations. The combination of tensor representations and the Hadamard product results in vastly superior paraphrasing.
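The paraphrasing geometry above admits a very short implementation. The sketch below assumes trained matrices U (word embeddings, one column per word) and Q (preposition embeddings, with the last column corresponding to the extra non-preposition slice); the retrieval-by-cosine step and all names are our illustration, not the authors' released code.

```python
import numpy as np

def paraphrase_verb_phrase(U, Q, vocab, head_idx, prep_idx, top_n=5):
    """Find verbs v such that u_v * q_{K+1} ~ u_h * q_i (elementwise).

    U: (d, N) word embeddings; Q: (d, K+1) preposition embeddings.
    """
    target = U[:, head_idx] * Q[:, prep_idx]        # u_h ⊙ q_i
    cands = U * Q[:, -1][:, None]                   # u_v ⊙ q_{K+1} for all v
    sims = cands.T @ target / (
        np.linalg.norm(cands, axis=0) * np.linalg.norm(target) + 1e-8)
    sims[head_idx] = -np.inf                        # exclude the head itself
    best = np.argsort(-sims)[:top_n]
    return [vocab[i] for i in best]
```

In the full pipeline, the returned candidates would still be filtered to keep only verbs, as described in Appendix B.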
In the next two sections, we evaluate tensor-based preposition embeddings in the context of two important downstream NLP tasks: preposition selection and preposition attachment disambiguation. In this work, we use WikiCorpus as the training corpus for the different sets of embeddings. We train tensor embeddings with both Orth-ALS and weighted decomposition. The implementation of Orth-ALS is built upon the SPLATT toolkit BID31. We perform orthogonalization in the first 5 iterations of the Orth-ALS decomposition, and training is completed when its performance stabilizes. As for the weighted decomposition, we train for 20 iterations, and its hyperparameters are set as x_max = 10 and α = 0.75. We also include two baselines, word2vec's CBOW model and GloVe, for comparison. We use 20 training iterations for both models. Hyperparameters in word2vec are set as: window size = 6, negative sampling = 25 and down-sampling = 1e-4. Hyperparameters in GloVe are set as: window size = 6, x_max = 10, α = 0.75 and minimum word count = 5. We note that all the representations in this study — word2vec, GloVe and our tensor embeddings — are of dimension 200. The detection and correction of grammatical errors is an important task in NLP. Second-language learners tend to make more mistakes, and in particular, prepositional errors make up about 13% of all errors, ranking second among the most common error types BID19. This is due to the fact that prepositions are highly polysemous and have flexible usage. Accurate preposition selection needs to capture the interaction between a preposition and its context well; this task is therefore a natural way to evaluate how well lexical interactions are captured by different methods. Task. Given a sentence in English with a preposition, we either replace the preposition (with the correct one) or retain it. For example, "to" should be corrected to "of" in the sentence "It can save the effort to carrying a lot of cards". Formally, there is a closed set of preposition candidates P = {p_1, ..., p_m}. A preposition p is used in a sentence s consisting of words s = {..., w_{-2}, w_{-1}, p, w_1, w_2, ...}. If used incorrectly, we need to replace p by another preposition p̂ ∈ P based on the context; dataset statistics are given in TAB2. We focus on the most frequent 49 prepositions, listed in Appendix A. Evaluation metric. Three metrics — precision, recall and F1 score (the harmonic mean of precision and recall) — are used to evaluate preposition selection performance. Our algorithm. We first preprocess the dataset by removing articles, determiners and pronouns, and take a context window of 3. We divide the task into two steps: error identification and error correction. First, we decide whether a preposition is used correctly in the context. If not, we suggest another preposition as a replacement in the second step. The identification step uses only three features: the cosine similarity between the current preposition embedding and the average context embedding, the rank of the preposition in terms of cosine similarity, and the probability that this preposition is left unchanged in the training corpus. We build a decision tree classifier with these three features and find that we can identify errors with 98% F1 score on the CoNLL dataset and 96% on the SE dataset. When it comes to error correction, we focus only on the errors identified in the first stage. Suppose that the original preposition is q and the candidate preposition is p. The candidate features include the context embeddings and pair/triple similarity scores between the candidate and its context (cf. the ablation in TAB4), together with a confusion probability: the probability that q is replaced by p in the training data.
A two-layer feedforward neural network (FNN) with hidden sizes of 500 and 10 is trained with these features to score prepositions in each sentence. The preposition with the highest score is the suggested edit. Baseline. The state-of-the-art approach to preposition selection uses n-gram statistics from a large corpus BID26. Features such as pointwise mutual information (PMI) and part-of-speech tags are fed into a supervised scoring system, and the prepositions with the highest scores are chosen as suggestions. The performance is affected by both the system architecture and the features. To evaluate the benefits brought by our tensor embedding-based features, we also consider other baselines which have the same two-step architecture but whose features are generated from word2vec and GloVe embeddings. These baselines allow us to compare representation power independently of the classifier. Result. We compare our methods against the baselines mentioned above in TAB3. As can be seen, tensor embeddings achieve the best performance among all approaches. In particular, the tensor with weighted decomposition has the highest F1 score on the CoNLL dataset, a 6% improvement over the state of the art. The tensor with ALS decomposition performs best on the SE dataset, achieving a 2% improvement. We also note that, with the same architecture, tensor embeddings perform much better than word2vec and GloVe embeddings on both datasets. This validates the representation power of tensor embeddings. To gain deeper insight into feature importance in the preposition selection task, we also perform an ablation analysis of the tensor method with weighted decomposition, as shown in TAB4. We remove one feature at a time and report the performance achieved by the remaining features. We find that the left context is the most important feature on the CoNLL dataset, whereas the confusion score is the most important on the SE dataset. Pair similarity and triple similarity are less important compared with the other features. This is because the neural network can learn lexical similarity from the embedding features, which diminishes the importance of the explicit similarity features. Discussion. We now analyze the reasons why our approach selects wrong prepositions in some sentences. Limited context window. We focus on the local context within the preposition's window. In some cases, we find that head words may lie outside the context window. Consider the sentence "prevent more of this kind of tragedy to happening", where "to" should be corrected to "from". Given the context window of 3, we cannot access the lexical clues provided by "prevent", which leads to the selection error in our approach. Preposition selection requires more context. Even when the context window contains all words on which the preposition depends, it still may not be sufficient to select the right preposition. For example, in the sentence "it is controlled by bad men in a not good purpose", our approach replaces the preposition "in" with the preposition "on", given the high frequency of the phrase "on purpose". The correct preposition, based on the whole sentence, should be "for". In this section, we discuss prepositional phrase (PP) attachment disambiguation, a well-studied, but still open, hard task in syntactic parsing. A prepositional phrase usually consists of a head word, a preposition and child words. An example is "he saw an elephant with long tusks", where "with" is attached to the noun "elephant". In another example, "he saw an elephant with his telescope", "with" is attached to the verb "saw". The correct head can change even when only the child word is changed.
PP attachment disambiguation inherently requires an accurate description of the interactions among head, preposition and child, which makes it an ideal task to evaluate our tensor-based embeddings. Task. The English dataset used in our work is collected from a linguistic treebank by BID2. TAB5 enumerates statistics associated with this dataset. Each instance consists of several head candidates, a preposition and a child word. We need to pick the head to which the preposition is attached. In the examples above, the words "saw" and "elephant" are head candidates. Our algorithm. Let v_h, v_p and v_c be the embeddings for the head candidate h, preposition p and child c, respectively. The features we use for attachment disambiguation are: (a) embedding features: the candidate, preposition and child embeddings; (b) triple similarity: triple_sim(h, p, c) = DISPLAYFORM0; (c) the part-of-speech (POS) tag of the candidate and its next word; and (d) the distance between h and p. We use a basic neural network, a two-layer feedforward network (FNN) with hidden sizes of 1000 and 20, to take the input features and predict the probability that a candidate is the head. The candidate with the highest likelihood is chosen as the head. Baseline. We include the following state-of-the-art approaches to preposition attachment disambiguation. The linguistic resources they used to enrich features are listed in Table 7. Head-Prep-Child-Dist (HPCD) Model BID2: this compositional neural network is used to train task-specific word representations. Low-Rank Feature Representation (LRFR) BID33: this method incorporates word parts, contexts and labels into a tensor, and uses the decomposed vectors as features for disambiguation. Ontology LSTM (OntoLSTM) BID6: word vectors are initialized with GloVe, extended with AutoExtend BID27, and then trained via LSTMs for head selection. Similar to the experiments in preposition selection, we also include baselines which have the same feedforward network architecture but generate features with vectors trained by word2vec and GloVe. They are denoted as FNN with different initializations in Table 7. Since attachment disambiguation is a selection task, accuracy is a natural evaluation metric. Result. We compare the results and linguistic resources of the different approaches in Table 7, where we see that our simple classifier built on the tensor representations is within 1% of the state of the art; the prior state-of-the-art results were obtained with the significant linguistic resources enumerated in Table 7. With the same feedforward neural network as the classifier, our tensor-based approaches (both ALS and WD) achieve better performance than word2vec and GloVe. The ablation analysis in TAB6 shows that the head vector feature affects performance the most (indicating that heads interact more closely with prepositions), with the POS tag coming second. Similarity features appear less important, since the classifier has access to lexical relatedness via the embedding features. The distance feature is reported to be important in previous works, since 81.7% of sentences take the word closest to the preposition as the head. In our experiments, distance becomes less important compared with the embedding features. Discussion. We find that one source of attachment disambiguation error is the lack of broader context in our features. Broader context is critical in examples such as "worked" and "system", which are head candidates for "for trades" in a sentence. Both are reasonable heads in the expressions "worked for trades" and "system for trades".
It requires more context to decide that "system", rather than "worked", is the head in the given sentence. We further explore the difference in identifying head verbs and head nouns. We have found that the tensor's geometry aids in paraphrasing verb phrases, and thus it captures the interaction between verbs and prepositions well. In this task, we want to see whether our approach does better at identifying head verbs than head nouns. There are 883 instances with head verbs, on which we achieve an accuracy of 0.897, and 1068 instances with head nouns, where the accuracy is 0.887. We do better at selecting head verbs, but the performance does not differ much between verbs and nouns. Tensor Decomposition. Tensors embed higher-order interactions among different modes, and tensor decomposition captures these relations via lower-dimensional representations. There are several decomposition methods, such as Alternating Least Squares (ALS) BID15, Simultaneous Diagonalization (SD) BID16, and optimization-based methods BID20 BID23. Orthogonalized Alternating Least Squares (Orth-ALS) adds a component-orthogonalization step to each update step of the ALS method BID29. Orth-ALS, supported by theoretical guarantees and, more relevantly, good empirical performance, is the algorithm of choice in this paper. Preposition Selection. Preposition selection, a major area of study in both syntactic and semantic computational linguistics, is also a very practical topic in the context of grammar correction and second-language learning. Prior works typically use hand-crafted heuristic rules for preposition correction BID32; lexical n-gram features are also known to be very useful BID26; BID28. Syntactic information such as POS tags and dependency parsing can further enrich the features BID13, and is standard in generic tasks involving prepositions. Prepositional Attachment Disambiguation. There is a storied literature on prepositional attachment disambiguation, long recognized as an important part of syntactic parsing BID14. Recent works based on word embeddings have pushed the boundary of state-of-the-art empirical results. A seminal work in this direction is the Head-Prep-Child-Dist (HPCD) Model, which trained word embeddings in a compositional network designed to maximize the accuracy of head prediction BID2. A very recent work has proposed an initialization with semantics-enriched GloVe embeddings, and retrained the representations with LSTM-RNNs BID6. Another recent work has used tensor decompositions to capture the relation between word representations and their labels BID33. Co-occurrence counts of word pairs in sentences and the resulting word vector representations (embeddings) have revolutionized NLP research. A natural generalization is to consider co-occurrence counts of word triples, resulting in a third-order tensor. Partly due to the size of the tensor (a vocabulary of 1M leads to a tensor with 10^18 entries!) and partly due to the extreme dynamic range of the entries (including sparsity), word vector representations via tensor decompositions have largely been inferior to their lower-order cousins (i.e., regular word embeddings). In this work, we trek this well-trodden terrain, but restrict word triples to the scenario where one of the words is a preposition. This is linguistically justified, since prepositions are understood to model interactions between pairs of words.
Numerically, this is also very well justified, since the sparsity and dynamic range of the resulting tensor are no worse than those of the original matrix of pairwise co-occurrence counts; this is because prepositions are very frequent and co-occur with essentially every word in the vocabulary. Our intrinsic evaluations and new state-of-the-art results in downstream evaluations lend strong credence to the tensor-based approach to prepositional representation. We expect our vector representations of prepositions to be widely used in more complicated downstream NLP tasks where the prepositional role is crucial, including "text to programs" BID10. The list of the 49 most frequent prepositions used in the preposition selection task is shown below: about, above, absent, across, after, against, along, alongside, amid, among, amongst, around, at, before, behind, below, beneath, beside, besides, between, beyond, but, by, despite, during, except, for, from, in, inside, into, of, off, on, onto, opposite, outside, over, since, than, through, to, toward, towards, under, underneath, until, upon, with. B PARAPHRASING OF PHRASAL VERBS. In Section 3 we provided a simple linear-algebraic method to generate paraphrases of compositional phrasal verbs. We approximate the paraphrase representation u_v via Eq. 3, and get a list of words with similar representations as candidate paraphrases. These candidates do not include words that are the same as the component words of the phrase. We also require that a reasonable paraphrase be a verb. Therefore we choose the verb which is most similar to u_v among the candidates. We filter verbs with the Python NLTK tools BID3 and a linguistics library. Sample examples of the top paraphrases are provided in TAB1. Here we provide a detailed enumeration of the results of our linear-algebraic method on a new dataset of 60 compositional phrases. In the paraphrasing task, we consider three sets of embeddings — word2vec, GloVe and tensor embeddings from the weighted decomposition — and two composition methods, addition and the Hadamard product, to approximate the paraphrase representation from the verb and preposition vectors. Addition is included here because it has been widely used to approximate phrasal embeddings in previous works BID22 BID9. We enumerate the paraphrases generated by the six combinations of embeddings and composition methods, validating the representation power of tensor embeddings and the multiplication (Hadamard product) composition method. As we can see from TAB7 and 10, tensor embeddings work better with multiplicative composition, whereas word2vec and GloVe work better with additive composition. Overall, tensor embeddings together with multiplication give better paraphrases than the other approaches.
This work is about a tensor-based method for preposition representation training.
965
scitldr
A promising class of generative models maps points from a simple distribution to a complex distribution through an invertible neural network. Likelihood-based training of these models requires restricting their architectures to allow cheap computation of Jacobian determinants. Alternatively, the Jacobian trace can be used if the transformation is specified by an ordinary differential equation. In this paper, we use Hutchinson's trace estimator to give a scalable unbiased estimate of the log-density. The result is a continuous-time invertible generative model with unbiased density estimation and one-pass sampling, while allowing unrestricted neural network architectures. We demonstrate our approach on high-dimensional density estimation, image generation, and variational inference, achieving state-of-the-art results among exact-likelihood methods with efficient sampling. Reversible generative models use cheaply invertible neural networks to transform samples from a fixed base distribution. Examples include NICE BID3, Real NVP BID4, and Glow BID12. These models are easy to sample from, and can be trained by maximum likelihood using the change of variables formula. However, this requires placing awkward restrictions on their architectures, such as partitioning dimensions or using rank-one weight matrices, in order to avoid an O(D^3)-cost determinant computation. In contrast to directly parameterizing a normalized distribution (e.g. BID17; BID5), the change of variables formula allows one to specify a complex normalized distribution p_x(x) implicitly by warping a normalized base distribution p_z(z) through an invertible function f: R^D → R^D. Given a random variable z ∼ p_z(z), the log density of x = f(z) follows

log p_x(x) = log p_z(z) − log |det(∂f(z)/∂z)|,

where ∂f(z)/∂z is the Jacobian of f. In general, computing the log determinant has a time cost of O(D^3). Much work has gone into developing restricted neural network architectures which make computing the Jacobian's determinant more tractable. These approaches broadly fall into three categories: Normalizing flows. By restricting the functional form of f, various determinant identities can be exploited. These models cannot be trained as generative models from data because they do not have a tractable inverse f^{-1}. However, they are useful for specifying approximate posteriors for variational inference BID13. Autoregressive transformations. By using an autoregressive model and specifying an ordering of the dimensions, the Jacobian of f is enforced to be lower triangular BID14 BID16. These models excel at density estimation for tabular datasets, but require D sequential evaluations of f to invert, which is prohibitive when D is large. Partitioned transformations. Partitioning the dimensions and using affine transformations makes the determinant of the Jacobian cheap to compute, and the inverse f^{-1} computable with the same cost as f BID3 BID4. This method allows the use of convolutional architectures, excelling at density estimation for image data BID4 BID12. Throughout this work, we refer to reversible generative models as those which use the change of variables to transform a base distribution to the model distribution while maintaining both efficient density estimation and efficient sampling capabilities using a single pass of the model. There exist several approaches to generative modeling which do not use the change of variables equation for training.
Generative adversarial networks (GANs) BID6 use large, unrestricted neural networks to transform samples from a fixed base distribution. Lacking a closed-form likelihood, an auxiliary discriminator model must be trained to estimate divergences or density ratios in order to provide a training signal. Autoregressive models BID5 BID17 directly specify the joint distribution p(x) as a sequence of explicit conditional distributions using the product rule. These models require at least O(D) evaluations to sample from. Variational autoencoders (VAEs) BID13 use an unrestricted architecture to explicitly specify the conditional likelihood p(x|z), but can only efficiently provide a stochastic lower bound on the marginal likelihood p(x). BID2 define a generative model for data x ∈ R^D, similar to those based on the change of variables formula, but replace the warping function with an integral of continuous-time dynamics. The generative process first samples from a base distribution z_0 ∼ p_{z_0}(z_0). Then, given an ODE whose dynamics are defined by the parametric function ∂z(t)/∂t = f(z(t), t; θ), we solve the initial value problem with z(t_0) = z_0 to obtain a data sample x = z(t_1). These models are called Continuous Normalizing Flows (CNFs). The change in log-density under this model follows a second differential equation, called the instantaneous change of variables formula BID2:

∂ log p(z(t))/∂t = −Tr(∂f/∂z(t)).

We can compute the total change in log-density by integrating across time:

log p(z(t_1)) = log p(z(t_0)) − ∫_{t_0}^{t_1} Tr(∂f/∂z(t)) dt.

Given a datapoint x, we can compute both the point z_0 which generates x, as well as log p(x) under the model, by solving the combined initial value problem

d/dt [z(t); Δ(t)] = [f(z(t), t; θ); −Tr(∂f/∂z(t))],    [z(t_1); Δ(t_1)] = [x; 0],

which integrates the combined dynamics of z(t) and the log-density of the sample backwards in time from t_1 to t_0. We can then compute log p(x) from the solution of this initial value problem by adding log p_{z_0}(z_0): log p(x) = log p_{z_0}(z(t_0)) − Δ(t_0). Existence and uniqueness of the solution require that f and its first derivatives be Lipschitz continuous BID10, which can be satisfied in practice using neural networks with smooth Lipschitz activations, such as softplus or tanh. CNFs are trained to maximize this log-likelihood. The objective involves the solution to an initial value problem with dynamics parameterized by θ. For any scalar loss function L which operates on the solution to an initial value problem,

L(z(t_1)) = L( z(t_0) + ∫_{t_0}^{t_1} f(z(t), t; θ) dt ),

BID2 shows that its derivative takes the form of another initial value problem:

d/dt (∂L/∂z(t)) = −(∂L/∂z(t))ᵀ (∂f(z(t), t; θ)/∂z(t)).

The quantity −∂L/∂z(t) is known as the adjoint state of the ODE. BID2 use a black-box ODE solver to compute z(t_1), and then a separate call to a solver to compute the adjoint ODE with the initial value ∂L/∂z(t_1). This approach is a continuous-time analog of the backpropagation algorithm BID1 and can be combined with gradient-based optimization to fit the parameters θ by maximum likelihood. Switching from discrete-time dynamics to continuous-time dynamics reduces the primary computational bottleneck of normalizing flows from O(D^3) to O(D^2), at the cost of introducing a numerical ODE solver. This allows the use of more expressive architectures. For example, each layer of the original normalizing flows model is a one-layer neural network with only a single hidden unit. In contrast, the instantaneous transformation used in planar continuous normalizing flows BID2 is a one-layer neural network with many hidden units. In this section, we construct an unbiased estimate of the log-density with O(D) cost, allowing completely unrestricted neural network architectures to be used.
In general, computing Tr(∂f/∂z(t)) exactly costs O(D^2), or approximately the same cost as D evaluations of f, since each entry of the diagonal of the Jacobian requires computing a separate derivative of f BID7. However, there are two tricks that can help. First, vector-Jacobian products vᵀ(∂f/∂z) can be computed for approximately the same cost as evaluating f, using reverse-mode automatic differentiation. Second, we can get an unbiased estimate of the trace of a matrix by taking a double product of that matrix with a noise vector:

Tr(A) = E_{p(ε)}[εᵀ A ε].

The above equation holds for any D-by-D matrix A and any distribution p over D-dimensional vectors ε such that E[ε] = 0 and Cov(ε) = I. The Monte Carlo estimator derived from this identity is known as Hutchinson's trace estimator BID9 BID0. To keep the dynamics deterministic within each call to the ODE solver, we can use a fixed noise vector ε for the duration of each solve without introducing bias:

log p(z(t_1)) = log p(z(t_0)) − E_{p(ε)}[ ∫_{t_0}^{t_1} εᵀ (∂f/∂z(t)) ε dt ].

Typical choices of p(ε) are a standard Gaussian or Rademacher distribution BID9. Often, there exist bottlenecks in the architecture of the dynamics network, i.e. hidden layers whose width H is smaller than the dimension of the input D. In such cases, we can reduce the variance of Hutchinson's estimator by using the cyclic property of the trace: writing f as a composition of a D → H map h and an H → D map g, we have Tr(∂f/∂z) = Tr((∂g/∂h)(∂h/∂z)) = Tr((∂h/∂z)(∂g/∂h)), so the estimator can be applied to an H × H matrix instead of a D × D one. Since the variance of the estimator for Tr(A) grows asymptotically like ||A||²_F, this can reduce variance. When f has multiple hidden layers, we choose H to be the smallest dimension. This bottleneck trick can also reduce the norm of the matrix, which may further help reduce the variance of the trace estimator. As introducing a bottleneck limits our model capacity, we do not use this trick in our experiments; however, this trick can reduce variance when a bottleneck is used, as shown in our ablation studies. Our complete method uses the continuous-time dynamics defined above and this efficient log-likelihood estimator to produce the first scalable and reversible generative model with an unconstrained Jacobian. We call this method Free-Form Jacobian of Reversible Dynamics (FFJORD). Pseudo-code of our method is given in Algorithm 1, and TAB1 summarizes the capabilities of our model compared to other recent generative modeling approaches. Assuming the cost of evaluating f is on the order of O(DH), where D is the dimensionality of the data and H is the size of the largest hidden layer in f, the cost of computing the likelihood in models with repeated use of invertible transformations is O((DH + D^3)L), where L is the number of transformations used. For CNFs, this reduces to O((DH + D^2)L̂), where L̂ is the number of evaluations of f used by the ODE solver. With FFJORD, this reduces further to O((DH + D)L̂).

Algorithm 1: Unbiased stochastic log-density estimation using the FFJORD model.
Require: dynamics f_θ, start time t_0, stop time t_1, data samples x, data dimension D.
1: ε ← sample from a unit-variance distribution with shape x.shape   (sample ε outside of the integral)
2: define f_aug([z(t), log p(t)], t):   (augment f with log-density dynamics)
3:     f_t ← f_θ(z(t), t)
4:     g ← εᵀ(∂f_t/∂z(t))   (compute vector-Jacobian product with automatic differentiation)
5:     return [f_t, −g · ε]   (unbiased estimate of −Tr(∂f/∂z(t)))
6: [z_0, Δ_logp] ← odeint(f_aug, [x, 0], t_1, t_0)   (solve the ODE backwards in time)
7: log p̂(x) ← log p_{z_0}(z_0) − Δ_logp
8: return log p̂(x)

Figure 2: Comparison of trained Glow, planar CNF, and FFJORD models on 2-dimensional distributions, including multi-modal and discontinuous densities.
We demonstrate FFJORD on a variety of density estimation tasks, and for approximate inference in variational autoencoders BID13. Experiments were conducted using a suite of GPU-based ODE solvers and an implementation of the adjoint method for backpropagation.
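Before turning to the experiments, here is a minimal PyTorch sketch of the estimator in Algorithm 1, using torch.autograd.grad for the vector-Jacobian product and a generic odeint (e.g., from the torchdiffeq package); the wrapper functions and their names are our assumptions, not the paper's released code.

```python
import torch

def augmented_dynamics(f_theta, eps):
    """Dynamics for [z(t), Delta_logp(t)] with Hutchinson's trace estimator."""
    def aug(t, state):
        z, _ = state
        with torch.enable_grad():
            z = z.requires_grad_(True)
            dz = f_theta(z, t)
            # eps^T (df/dz) via one reverse-mode pass; then dot with eps:
            # eps^T (df/dz) eps is an unbiased estimate of Tr(df/dz)
            eTJ = torch.autograd.grad(dz, z, grad_outputs=eps,
                                      create_graph=True)[0]
            div = (eTJ * eps).flatten(1).sum(dim=1)
        return dz, -div
    return aug

def log_density(f_theta, odeint, x, base_log_prob, t0=0.0, t1=1.0):
    eps = torch.randn_like(x)              # fixed for the whole solve
    aug = augmented_dynamics(f_theta, eps)
    dlogp0 = torch.zeros(x.shape[0], device=x.device)
    ts = torch.tensor([t1, t0])            # integrate backwards in time
    z_traj, dlogp_traj = odeint(aug, (x, dlogp0), ts)
    z0, dlogp = z_traj[-1], dlogp_traj[-1]
    return base_log_prob(z0) - dlogp       # log p(x) as in Algorithm 1
```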
In all experiments, the Runge-Kutta 4 algorithm was used to solve the ODEs. We ensure the tolerance is set low enough that numerical error is negligible; see Appendix C. We used Hutchinson's trace estimator during training and the exact trace when reporting test results. This was done in all experiments except for our density estimation models trained on MNIST and CIFAR10, where computing the exact Jacobian trace was too expensive. The dynamics of FFJORD are defined by a neural network f which takes as input the current state z(t) ∈ R^D and the current time t ∈ R. We experimented with several ways to incorporate t as an input to f, such as hyper-networks, but found that simply concatenating t onto z(t) at the input to every layer worked well, and this was used in all of our experiments. We first train on 2-dimensional data to visualize the model and the learned dynamics. In FIG1, we show that by warping a simple isotropic Gaussian, FFJORD can fit both multi-modal and even discontinuous distributions. The number of evaluations of the ODE solver is roughly 70-100 on all datasets, so we compare against a Glow model with 100 discrete layers. The learned distributions of both FFJORD and Glow can be seen in FIG1. Interestingly, we find that Glow learns to stretch the unimodal base distribution into multiple modes, but has trouble modeling the areas of low probability between disconnected regions. In contrast, FFJORD is capable of modeling disconnected modes and can also learn convincing approximations of discontinuous density functions (middle row in FIG1). Since the main benefit of FFJORD is the ability to train with deeper dynamics networks, we also compare against a planar CNF BID2. Some competing models BID8 cannot be sampled from without resorting to correlated or expensive sampling algorithms such as MCMC. On MNIST we find that FFJORD can model the data as effectively as Glow and Real NVP using only a single flow defined by a single neural network. This is in contrast to Glow and Real NVP, which must compose many flows to achieve similar performance. When we use multiple flows in a multiscale architecture (like those used by Glow and Real NVP) we obtain better performance on MNIST and comparable performance to Glow on CIFAR10. Notably, FFJORD is able to achieve this performance while using less than 2% as many parameters as Glow. We also note that Glow uses a learned base distribution, whereas FFJORD and Real NVP use a fixed Gaussian. A summary of our results on density estimation can be found in TAB4, and samples can be seen in Figure 3. Full details on the architectures used, our experimental procedure, and additional samples can be found in Appendix B.1. In general, our approach is slower than competing methods, but we find the memory efficiency of the adjoint method allows us to use much larger batch sizes than those methods. On the tabular datasets we used batch sizes up to 10,000, and on the image datasets we used a batch size of 900. We compare FFJORD to other normalizing flows for use in variational inference. In VAEs it is common for the encoder network to also output the parameters of the flow as a function of the input x. With FFJORD, we found this led to differential equations which were too difficult to integrate numerically. Instead, the encoder network outputs a low-rank update to a global weight matrix and an input-dependent bias vector.
When used in recognition nets, the neural network layers defining the dynamics inside FFJORD take the form

h_out = σ( (W + Û(x)V̂(x)ᵀ) h + b + b̂(x) ),

where h is the input to the layer, σ is an element-wise activation function, W and b are global parameters, D_in and D_out are the input and output dimensions of this layer, and Û(x), V̂(x), b̂(x) are input-dependent parameters returned from an encoder network (Û(x)V̂(x)ᵀ being the low-rank update to the global D_out × D_in weight matrix W). A full description of the model architectures used and our experimental setup can be found in Appendix B.2. On every dataset tested, FFJORD outperforms all other competing normalizing flows. A summary of our variational inference results can be found in TAB6. We performed a series of ablation experiments to gain a better understanding of the proposed model. We plotted the training losses on MNIST using an encoder-decoder architecture (see Appendix B.1 for details). The loss during training is plotted in FIG2, where we either use the trace estimator directly on the D × D Jacobian, or use the bottleneck trick to reduce the dimension to H × H. Interestingly, we find that while the bottleneck trick can lead to faster convergence when the trace is estimated using a Gaussian-distributed ε, we did not observe faster convergence when using a Rademacher-distributed ε. The full computational cost of integrating the instantaneous change of variables is O(DHL̂), where D is the dimensionality of the data, H is the size of the hidden state, and L̂ is the number of function evaluations (NFE) that the adaptive solver uses to integrate the ODE. In general, each evaluation of the model is O(DH), and in practice H is typically chosen to be close to D.
Figure 5: NFE used by the adaptive ODE solver is approximately independent of data dimension. Lines are smoothed using a Gaussian filter.
We train VAEs using FFJORD flows with increasing latent dimension D. The NFE throughout training is shown in Figure 5. In all models, we find that the NFE increases throughout training, but converges to the same value, independent of D. We conjecture that the number of evaluations is not dependent on the dimensionality of the data but on the complexity of its distribution, or more specifically, on how difficult it is to transform its density into the base distribution.
Figure 6: For image data, a single FFJORD flow can achieve performance near that of the multiscale architecture while using half the number of evaluations.
Crucial to the scalability of Real NVP and Glow is the multiscale architecture originally proposed in BID4. We compare a single-scale encoder-decoder style FFJORD with a multiscale FFJORD on the MNIST dataset, where both models have a comparable number of parameters, and plot the total NFE — in both forward and backward passes — against the loss achieved in Figure 6. We find that while the single-scale model uses approximately half as many function evaluations as the multiscale model, it is not able to achieve the same performance as the multiscale model. Number of function evaluations can be prohibitive. The number of function evaluations required to integrate the dynamics is not fixed ahead of time, and is a function of the data, model architecture, and model parameters. This number tends to grow as the model trains and can become prohibitively large, even when memory stays constant due to the adjoint method.
Various forms of regularization, such as weight decay and spectral normalization BID15, can be used to reduce this quantity, but their use tends to hurt performance slightly. Limitations of general-purpose ODE solvers. In theory, our model can approximate any differential equation (given mild assumptions based on existence and uniqueness of the solution), but in practice our reliance on general-purpose ODE solvers restricts us to non-stiff differential equations that can be efficiently solved. ODE solvers for stiff dynamics exist, but they evaluate f many more times to achieve the same error. We find that a small amount of weight decay regularizes the ODE to be sufficiently non-stiff. We have presented FFJORD, a reversible generative model for high-dimensional data which can compute exact log-likelihoods and can be sampled from efficiently. Our model uses continuous-time dynamics to produce a generative model which is parameterized by an unrestricted neural network. All required quantities for training and sampling can be computed using automatic differentiation, Hutchinson's trace estimator, and black-box ODE solvers. Our model stands in contrast to other methods with similar properties, which rely on restricted, hand-engineered neural network architectures. We demonstrated that this additional flexibility allows our approach to achieve on-par or improved performance on density estimation and variational inference. We believe there is much room for further work exploring and improving this method. FFJORD is empirically slower to evaluate than other reversible models like Real NVP or Glow, so we are interested specifically in ways to reduce the number of function evaluations used by the ODE solver without hurting predictive performance. Advancements like these will be crucial in scaling this method to even higher-dimensional datasets. We thank Yulia Rubanova and Roger Grosse for helpful discussions. Samples from our FFJORD models trained on MNIST and CIFAR10 can be found in Figure 7.
Figure 7: Samples and data from our image models. MNIST on left, CIFAR10 on right.
On the tabular datasets we performed a grid search over network architectures. We searched over models with 1, 2, 5, or 10 flows, with 1, 2, 3, or 4 hidden layers per flow. Since each dataset has a different number of dimensions, we searched over hidden dimensions equal to 5, 10, or 20 times the data dimension (the hidden dimension multiplier in TAB10). We tried both the tanh and softplus nonlinearities. The best performing models can be found in TAB10. On the image datasets we experimented with two different model architectures: a single flow with an encoder-decoder style architecture, and a multiscale architecture composed of multiple flows. While the encoder-decoder architectures were able to fit MNIST and obtain competitive performance, they were unable to fit more complicated image datasets such as CIFAR10 and Street View House Numbers. The architecture for MNIST which obtained the results in TAB4 was composed of four convolutional layers with 64 → 64 → 128 → 128 filters and down-sampling with strided convolutions by two every other layer. There are then four transpose-convolutional layers whose filters mirror the first four layers and up-sample by two every other layer. The softplus activation function is used in every layer. The multiscale architectures were inspired by those presented in BID4.
We compose multiple flows together, interspersed with "squeeze" operations which down-sample the spatial resolution of the images and increase the number of channels. These operations are stacked into a "scale block" which contains N flows, a squeeze, then N flows. For MNIST we use 3 scale blocks, for CIFAR10 we use 4 scale blocks, and we let N = 2 for both datasets. Each flow is defined by 3 convolutional layers with 64 filters and a kernel size of 3. The softplus nonlinearity is used in all layers. Both models were trained with the Adam optimizer BID11. We trained for 500 epochs with a learning rate of .001, which was decayed to .0001 after 250 epochs. Training took place on six GPUs and completed after approximately five days. Our experimental procedure exactly mirrors that of prior work on this benchmark. We use the same 7-layer encoder and decoder, learning rate (.001), optimizer, batch size, and early stopping procedure (stop after 100 epochs of no validation improvement). The only difference was in the normalizing flow used in the approximate posterior. We performed a grid search over neural network architectures for the dynamics of FFJORD. We searched over networks with 1 and 2 hidden layers and hidden dimensions of 512, 1024, and 2048. We used flows with 1, 2, or 5 steps and weight matrix updates of rank 1, 20, and 64. We use the softplus activation function for all datasets except for Caltech Silhouettes, where we used tanh. The best performing models can be found in TAB11. Models were trained on a single GPU, and training took between four hours and three days depending on the dataset.
Table 6: Negative log-likelihood on test data for density estimation models. Means/stdev over 3 runs. Results for Real NVP, MADE, MAF, TAN, and MAF-DDSF are taken from BID8. In reproducing Glow, we were able to get results comparable to the reported Real NVP results by removing the invertible fully connected layers.
ODE solvers are numerical integration methods, so there is error inherent in their outputs. Adaptive solvers (like those used in all of our experiments) attempt to predict the errors that they accrue and modify their step size to keep the error below a user-set tolerance. It is important to be aware of this error when we use these solvers for density estimation, as the solver outputs the density that we report and compare with other methods. When the tolerance is too low, we run into machine precision errors. Similarly, when the tolerance is too high, the errors are large, our training objective becomes biased, and we can run into divergent training dynamics. Since a valid probability density function integrates to one, we take a model trained on the 2-dimensional data of FIG0 and numerically find the area under the curve using a Riemann sum on a very fine grid. We do this for a range of tolerance values and show the resulting error in FIG3. We set both atol and rtol to the same tolerance. The numerical error follows the same order as the tolerance, as expected. During training, we find that the error becomes non-negligible when using tolerance values higher than 10^-5. For most of our experiments, we set the tolerance to 10^-5, as that gives reasonable performance while requiring a small number of evaluations. For the tabular experiments, we use atol=10^-8 and rtol=10^-6.
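The area-under-the-curve check described in Appendix C can be reproduced in a few lines. This sketch assumes a 2-dimensional model and a `log_density(model, x, atol, rtol)` function like the one outlined earlier; the grid extent and resolution are our choices, not the paper's.

```python
import numpy as np

def density_integral_error(log_density_fn, lim=4.0, n=400):
    """Riemann-sum check that a learned 2-D density integrates to ~1."""
    xs = np.linspace(-lim, lim, n)
    cell = (2 * lim / n) ** 2                        # area of one grid cell
    grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
    total = np.exp(log_density_fn(grid)).sum() * cell
    return abs(total - 1.0)

# sweep solver tolerances; the error should track the tolerance
for tol in [1e-8, 1e-6, 1e-4, 1e-2]:
    err = density_integral_error(
        lambda x: log_density(model, x, atol=tol, rtol=tol))
    print(f"tol={tol:.0e}  |integral - 1| = {err:.3e}")
```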
We use continuous time dynamics to define a generative model with exact likelihoods and efficient sampling that is parameterized by unrestricted neural networks.
966
scitldr
We propose a non-adversarial, feature matching-based approach to train generative models. Our approach, Generative Feature Matching Networks (GFMN), leverages pretrained neural networks such as autoencoders and ConvNet classifiers to perform feature extraction. We perform an extensive number of experiments with different challenging datasets, including ImageNet. Our experimental results demonstrate that, due to the expressiveness of the features from pretrained ImageNet classifiers, even by just matching first-order statistics, our approach can achieve state-of-the-art results on challenging benchmarks such as CIFAR10 and STL10. A key research focus in unsupervised learning is the training of generative methods that can model the observed data distribution. Good progress has been made in recent years with the advent of new approaches such as generative adversarial networks (GANs) BID10 and variational autoencoders (VAEs) BID19, which use deep neural networks as building blocks. Both methods have advantages and disadvantages, and a significant number of recent works focus on addressing their issues BID20 BID5. While the main disadvantage of VAEs is the generation of blurred images, the main issue with GANs is training instability due to adversarial learning. Feature matching has been explored to improve the stability of GANs BID41. The key idea in feature matching GANs (FM-GANs) is to use the discriminator network as a feature extractor, and to guide the generator to generate data that matches the feature statistics of the real data. Concretely, the objective function of the generator in FM-GAN consists of minimizing the mean squared error between the average features of a minibatch of generated data and those of a minibatch of real data. The features are extracted from one single layer of the discriminator. FM-GAN is somewhat similar to methods that use maximum mean discrepancy (MMD) BID11 BID12. However, while in FM-GAN the objective is to match the mean of the extracted features, in MMD-based generative models BID9 one normally aims to match all the moments of the two distributions using a Gaussian kernel. Although MMD-based generative models have strong theoretical guarantees, these models normally perform much worse than GANs on challenging benchmarks BID23. In this work, we focus on answering the following research question: can we train effective generative models by performing feature matching on features extracted from pretrained neural networks? In other words, we would like to know whether adversarial training of the feature extractor together with the generator is a requirement for training effective generators. Towards answering this question, we propose Generative Feature Matching Networks (GFMN), a new feature matching-based approach to train generative models that uses features from pretrained neural networks, breaking away from the problematic min/max game completely. Some interesting properties of the proposed method include: the loss function is directly correlated with generated image quality; mode collapse is not an issue; the same pretrained feature extractor can be used across different datasets; and both supervised (classifier) and unsupervised (autoencoder) models can be used as feature extractors. We perform an extensive number of experiments with different challenging datasets, including ILSVRC2012 (henceforth ImageNet) BID37.
We demonstrate that, due to the expressiveness of the features from pretrained ImageNet classifiers, even by just matching first order statistics, our approach can achieve state-of-the-art results for challenging benchmarks such as CIFAR10 and STL10. Moreover, we show that the same feature extractor is effective across different datasets. The main contributions of this work can be summarized as follows: we propose a new effective feature matching-based approach to train generative models that does not use adversarial learning, has stable training, and achieves state-of-the-art results; we propose an ADAM-based moving average method that allows effective training with small minibatches; our extensive quantitative and qualitative experimental results demonstrate that pretrained autoencoders and deep convolutional net (DCNN) classifiers can be effectively used as feature extractors for the purpose of learning generative models.

Let G be the generator implemented as a neural network with parameters θ, and let E be a pretrained neural network with L hidden layers. Our proposed approach consists of training G by minimizing the following loss function:

min_θ Σ_{j=1}^{M} || E_{x∼p_data} E_j(x) − E_{z∼N(0,I_{n_z})} E_j(G(z; θ)) ||_2^2    (1)

where ||.||_2 is the L2 loss; x is a real data point sampled from the data generating distribution p_data; z ∈ R^{n_z} is a noise vector sampled from the normal distribution N(0, I_{n_z}); E_j(x) denotes the output vector/feature map of the hidden layer j from E; and M ≤ L is the number of hidden layers used to perform feature matching. In practice, we train G by sampling mini-batches of true data and generated (fake) data and optimizing the parameters θ using stochastic gradient descent (SGD) with backpropagation. The network E is used for the purpose of feature extraction only and is kept fixed during the training of G.

A natural choice of unsupervised method to train a feature extractor is the autoencoder framework. The decoder part of an AE consists exactly of an image generator that uses features extracted by the encoder. Therefore, by design, the encoder network should be a good feature extractor for the purpose of generation. Let E and D be the encoder and the decoder networks with parameters φ and ψ, respectively. We pretrain the autoencoder using the mean squared error (MSE):

min_{φ,ψ} || x − D_ψ(E_φ(x)) ||_2^2

or the Laplacian pyramid loss BID25:

Lap1(x, x') = Σ_j 2^{-2j} | L_j(x) − L_j(x') |_1, with x' = D_ψ(E_φ(x)),

where L_j(x) is the j-th level of the Laplacian pyramid representation of x. The Laplacian pyramid loss provides a better signal for learning the high frequencies of the images and overcomes some of the known issues of the blurry images that one would get with a simple MSE loss. It was recently demonstrated that the Lap1 loss produces better results than the L2 loss for both autoencoders and generative models. Another attractive feature of the autoencoder framework is that the decoder network can be used to initialize the parameters of the generator, which can make the training of the generator easier by starting in a region closer to the data manifold. We use this option in our experiments and show that it leads to significantly better results.

Different past work has shown the usefulness and power of the features extracted from DCNNs pretrained on classification tasks BID42. In particular, features from DCNNs pretrained on ImageNet BID37 have demonstrated an incredible value for a number of different tasks. In this work, we perform experiments where we use different DCNNs pretrained on ImageNet to play the role of the feature extractor E.
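As a concrete illustration of the objective in Eq. 1, the following is a minimal PyTorch-style sketch of the minibatch feature matching loss. The helper names (generator, extract_features) are illustrative and not from the paper's code; extract_features is assumed to return one flattened feature tensor per matched layer of the frozen network E.

import torch

def feature_matching_loss(generator, extract_features, real_images, z):
    # Minibatch estimate of Eq. 1: sum_j || mean_j(real) - mean_j(fake) ||_2^2.
    fake_images = generator(z)
    with torch.no_grad():                       # E is frozen; no gradient needed for real data
        real_feats = extract_features(real_images)
    fake_feats = extract_features(fake_images)  # gradients flow through E's activations to G
    loss = 0.0
    for f_real, f_fake in zip(real_feats, fake_feats):
        # mean over the minibatch dimension for each matched layer j
        loss = loss + (f_real.mean(dim=0) - f_fake.mean(dim=0)).pow(2).sum()
    return loss

Minimizing this loss with respect to the generator parameters alone is what removes the min/max game: E never changes, so there is no adversarial dynamic to balance.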
Our hypothesis is that ImageNet-based features are powerful enough to allow the successful training of (cross-domain) generators by feature matching.

From feature matching loss to moving averages. In order to train with the (mean) feature matching loss, one would need large mini-batches for generating a good estimate of the mean features. When using images larger than 32×32 and DCNNs that produce millions of features, this can easily result in memory issues. To alleviate this problem we propose to use moving averages of the difference of means of real and generated data. Instead of computing the (memory) expensive feature matching loss in Eq. 1, we propose to keep moving averages v_j of the difference of feature means at layer j between real and generated data, so that

v_j ≈ (1/N) Σ_{i=1}^{N} E_j(x_i) − (1/N) Σ_{i=1}^{N} E_j(G(z_i)),

where N is the minibatch size. Using these moving averages we replace the loss given in Eq. 1 by

min_θ Σ_{j=1}^{M} || v_j ||_2^2,    (2)

where v_j is a moving average on ∆_j, the difference of the means of the features extracted by the j-th layer of E:

∆_j = (1/N) Σ_{i=1}^{N} E_j(x_i) − (1/N) Σ_{i=1}^{N} E_j(G(z_i)).    (3)

The moving average formulation of feature matching given in Eq. 2 gives a major advantage over the naive formulation of Eq. 1, since we can now rely on v_j to get a better estimate of the population feature means of real and generated data while using a small minibatch of size N. In order to obtain a similar result using the feature matching loss given in Eq. 1, one would need a minibatch with a large size N, which becomes problematic as the number of features becomes large.

ADAM moving average: from SGD to ADAM updates. Note that for a rate α, the moving average v_j has the following update:

v_j ← (1 − α) v_j + α ∆_j.

It is easy to see that the moving average is a gradient descent update on the following loss:

L(v_j) = (1/2) || v_j − ∆_j ||_2^2.    (4)

Hence, writing the gradient update with learning rate α we have equivalently:

v_j ← v_j − α ∇_{v_j} L(v_j) = v_j − α (v_j − ∆_j).

With this interpretation of the moving average we propose to get a better estimate of the moving average using the ADAM optimizer BID18 on the loss of the moving average given in Eq. 4, such that

v_j ← v_j − ADAM(∇_{v_j} L(v_j)),

where the ADAM(·) function is computed as follows:

m_t = β_1 m_{t−1} + (1 − β_1) x,  u_t = β_2 u_{t−1} + (1 − β_2) x^2,
m̂_t = m_t / (1 − β_1^t),  û_t = u_t / (1 − β_2^t),  ADAM(x) = α m̂_t / (√û_t + ε),

where x is the gradient for the loss function in Eq. 4, t is the iteration number, m_t is the first moment vector at iteration t, u_t is the second moment vector at iteration t, and β_1 = .9, β_2 = .999 and ε = 10^-8 are constants. m_0 and u_0 are initialized as proposed by BID18. We refer the reader to BID18 for a more detailed description of the ADAM optimizer. This moving average formulation, which we call ADAM Moving Average (AMA), promotes stable training when using small minibatches. The main advantage of AMA over the simple moving average (MA) is in its adaptive first order and second order moments that ensure a stable estimation of the moving averages v_j. In fact, this is a non-stationary estimation since the mean of the generated data changes during training, and it is well known that ADAM works well for such online and non-stationary losses BID18. In Section 4.2.3 we provide experimental results supporting: the memory advantage that the AMA formulation of feature matching offers over the naive implementation of feature matching; the stability advantage and improved generation that AMA allows when compared to the naive implementation of the simple MA. (A small code sketch of this update is given at the end of the related-work discussion below.)

Features from DCNNs pretrained on ImageNet have been used frequently to perform transfer learning in many computer vision tasks BID16. Some previous work uses DCNN features in the context of image generation and transformation.
BID7 combines a feature-based loss with an adversarial loss to improve the image quality of variational autoencoders (VAE) BID19. BID17 proposes a feature-based loss that uses features from different layers of the VGG-16 neural network and is effective for image transformation tasks such as style transfer and super-resolution. BID17 confirms the findings of BID28 that the initial layers of the network are more related to content while the last layers are more related to style. Our proposed approach is closely related to the recent body of work on MMD-based generative models BID9 BID23 BID2 BID36. In fact, our method is a type of MMD where we (only) match the first moment of the transformed data. Among the approaches reported in the literature, the closest to our method is the Generative Moment Matching Network + Autoencoder (GMMN+AE). In GMMN+AE, the objective is to train a generator G that maps from a prior uniform distribution to the latent code learned by a pretrained AE. To generate a new image, one samples a noise vector z from the prior, maps it to the AE latent space using G, then uses the (frozen) decoder to map from the AE latent space to the image space. One key difference in our approach is that our generator G maps from the z space directly to the data space, as in GANs BID10. Additionally, the dimensionality of the feature space that we use to perform distribution matching is orders of magnitude larger than the dimensionality of the latent code normally used in GMMN+AE. BID23 demonstrate that GMMN+AE is not competitive with GANs for challenging datasets such as CIFAR10. Recent MMD-based generative models have demonstrated state-of-the-art results with the use of adversarial learning to train the MMD kernel as a replacement of the fixed Gaussian kernel in GMMN BID23 BID2. Additionally, BID36 recently proposed a method to perform online learning of the moments while training the generator. Our proposed method differs from these previous approaches in that we use a frozen pretrained feature extractor to perform moment matching. The Generative Latent Optimization (GLO) model was proposed as a generative approach that jointly optimizes the model parameters and the noise input vectors z. GLO models obtain competitive results for the CelebA and LSUN datasets without using adversarial training. It was also demonstrated that the Laplacian pyramid loss is an effective way to improve the performance of non-adversarial methods that use a reconstruction loss. Our work relates also to the plug and play generative models of BID33, where a pretrained classifier is used to sample new images using MCMC sampling methods. Our work is also related to AE-based generative models such as the variational autoencoder (VAE) BID19, the adversarial autoencoder (AAE) BID29 and the Wasserstein autoencoder (WAE) BID40. While in VAE and WAE a penalty is used to impose a prior distribution on the hidden code vector of the AE, in AAE an adversarial training procedure is used for that purpose. In our method, the aim is to get a generative model out of a pretrained autoencoder. We fix the pretrained encoder to be the discriminator in a GAN-like setting. Another recent line of work that involves the use of AEs in generative models consists in applying AEs to improve GAN stability. BID43 proposed an energy-based approach where the discriminator is replaced by an autoencoder.
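Before moving on to the experiments, here is the minimal sketch of the ADAM moving average (AMA) update promised above. It is a sketch under stated assumptions, not the paper's implementation: delta is the minibatch difference of feature means ∆_j from Eq. 3, and the hyperparameters follow the constants given in Sec. 2.4.

import numpy as np

class AdamMovingAverage:
    # Keeps the AMA estimate v_j of the difference of feature means for one layer j.
    def __init__(self, num_features, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        self.v = np.zeros(num_features)   # moving average of delta_j
        self.m = np.zeros(num_features)   # first moment estimate
        self.u = np.zeros(num_features)   # second moment estimate
        self.t = 0
        self.alpha, self.beta1, self.beta2, self.eps = alpha, beta1, beta2, eps

    def update(self, delta):
        # Gradient of the moving-average loss (1/2)||v - delta||^2 w.r.t. v (Eq. 4).
        g = self.v - delta
        self.t += 1
        self.m = self.beta1 * self.m + (1 - self.beta1) * g
        self.u = self.beta2 * self.u + (1 - self.beta2) * g * g
        m_hat = self.m / (1 - self.beta1 ** self.t)
        u_hat = self.u / (1 - self.beta2 ** self.t)
        self.v -= self.alpha * m_hat / (np.sqrt(u_hat) + self.eps)
        return self.v

With the moment estimates removed, update reduces exactly to the simple moving average v ← (1 − α)v + α∆, which is the MA baseline the paper compares against.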
BID41 augments the training loss of the GAN generator by including a feature reconstruction loss term that is computed as the mean squared error of a set of features extracted by the discriminator and their reconstructed version. The reconstruction is performed using an AE trained on the features extracted by the discriminator for the real data.

Datasets: We evaluate our proposed approach on MNIST (60k training, 10k test images, 10 classes), CIFAR10 BID21 (50k training, 10k test images, 10 classes), STL10 BID6 (5K training, 8k test images, 100k unlabeled, 10 classes), CelebA BID26 (200k images) and different portions of the ImageNet BID37 dataset. MNIST and STL10 images are rescaled to 32×32, while CelebA and ImageNet images are rescaled to 64×64. CelebA images are center cropped to 160×160 before rescaling. In our experiments with all datasets but ImageNet, our generator G uses a DCGAN-like architecture. For CIFAR10, STL10 and CelebA, we use two extra layers as commonly used in previous works BID13. For ImageNet, we use a Resnet-based generator such as the one in BID30. More details about the architectures can be found in Appendix A.2.

Autoencoder Features: For most AE experiments, we use an encoder network whose architecture is similar to the discriminator in DCGAN (strided convolutions). We use batch normalization and a ReLU non-linearity after each convolution. We set the latent code size to 8, 128, 128, and 512 for MNIST, CIFAR10, STL10 and CelebA, respectively. To perform feature extraction, we get the output of each ReLU in the network as well as the output of the very last layer, the latent code. Additionally, we also perform some experiments where the encoder uses a VGG13 architecture. The decoder network D uses a network architecture similar to our generator G. More details in Appendix A.2.

Classifier Features: We perform our experiments on classifier features with VGG19 BID39 and Resnet18 networks BID14, which we pretrained using the whole ImageNet dataset with 1000 classes. More details about the pretrained ImageNet classifiers can be found in Appendices A.2 and A.3. We train GFMN with an ADAM optimizer and keep most of the hyperparameters fixed for the different datasets. We use n_z = 100 and minibatch size 64. When using autoencoder features, we set the learning rate to 5 × 10^-6 when G is initialized with D, and to 5 × 10^-5 when it is not. When using features from ImageNet classifiers, we set the learning rate to 1 × 10^-4. We use the ADAM moving average (Sec. 2.4) in all reported experiments.

In this section, we present experimental results on the use of pretrained encoders as feature extractors. The first two rows of Tab. 1 show GFMN performance in terms of Inception Score (IS) and Fréchet Inception Distance (FID) BID15 for CIFAR10 in the case where the (DCGAN-like) encoder is used as feature extractor. The use of a pretrained decoder D to initialize the generator gives a significant boost in both IS and FID. A visual comparison that corroborates the quantitative results can be found in Appendix A.5. In Figures 1a, 1b, and 1c, we present random samples generated by GFMN when trained with the MNIST, CIFAR10 and CelebA datasets, respectively. For each dataset, we train its respective AE using the (unlabeled) training set.

Figure 1: Generated samples from GFMN using a pretrained encoder as feature extractor.

The last six rows in Tab. 1 present the IS and FID for our best configurations that use ImageNet pretrained VGG19 and Resnet18 as feature extractors.
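To make the feature extraction setup concrete, the following sketch collects the output of every ReLU in the convolutional part of a pretrained VGG19, flattened per image, as the feature list used for matching. It assumes torchvision's ImageNet VGG19 for convenience, whereas the paper pretrains its own classifiers (Appendix A.3); the function name extract_features matches the hypothetical helper in the earlier loss sketch.

import torch
import torchvision.models as models

vgg = models.vgg19(pretrained=True).eval()
for p in vgg.parameters():
    p.requires_grad_(False)          # E is used for feature extraction only and stays frozen

def extract_features(images):
    feats, x = [], images
    for layer in vgg.features:       # convolutional part of VGG19
        x = layer(x)
        if isinstance(layer, torch.nn.ReLU):
            feats.append(x.flatten(start_dim=1))   # one (batch, num_features_j) tensor per layer
    return feats

Because every ReLU output is kept, the total number of matched features grows quickly with image size, which is what motivates the AMA trick and the small minibatches discussed above.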
There is a large boost in performance when ImageNet classifiers are used as feature extractors instead of autoencoders. Despite the classifiers being trained on data from a different domain (ImageNet vs. CIFAR10), the classifier features are significantly more effective. In all cases, the use of an initialized generator improves the results. However, the improvements are much less significant when compared to the ones obtained for the encoder feature extractor. We perform an additional experiment where we use VGG19 and Resnet18 simultaneously as feature extractors, which increases the number of features to 832K. This last configuration gives the best performance for both CIFAR10 and STL10. FIG0 shows random samples from the GFMN VGG19+Resnet18 model, where no initialization of the generator is used. In Tab. 2, we report IS and FID for an increasing number of layers (i.e. number of features) in our extractors VGG19 and Resnet18. We select up to 16 layers for VGG19 and 17 layers for Resnet18, which means that we excluded the output of fully connected layers. Using more layers dramatically improves the performance of both feature extractors, reaching (IS) peak performance when the maximum number of layers is used. The results in Tab. 1 are better than the ones in Tab. 2 because, for the former, we trained the models for a larger number of epochs. All models in Tab. 2 are trained for 391K generator updates, while the VGG19 and Resnet18 models in Tab. 1 are trained for 1.17M updates (we use small learning rates). Note that for both feature extractors, the features are ReLU activation outputs. As a result, the encodings may be quite sparse. Figs. 2d, 2e and 2f show generated images when 1, 3, and 9 layers are used for feature matching, respectively (more results in Appendix A.7).

In order to check if the number of features is the main factor for the performance, we performed an experiment where we trained an autoencoder whose encoder network uses a VGG13 architecture. This encoder produces a total of 244K features. We pretrained the autoencoder with both the CIFAR10 and ImageNet datasets, so as to compare the impact of the autoencoder training set size. The results for this experiment are in the 3rd and 4th rows of Tab. 1 (Encoder (VGG13)). Although there is some improvement in both IS and FID, especially when using the encoder pretrained with ImageNet, the boost is not comparable with the one obtained by using a VGG19 classifier. In other words, features from classifiers are significantly more informative than autoencoder features for the purpose of training generators by feature matching.

In this section, we present experimental results that evidence the advantage of our proposed ADAM moving average (AMA) over the simple moving average (MA). The main benefit of AMA is the promotion of stable training when using small minibatches. The ability to train with small minibatches is essential due to GFMN's need for a large number of features from DCNNs, which becomes a challenge in terms of GPU memory usage. For instance, our Pytorch BID34 implementation of GFMN can only handle minibatches of size up to 160 when using VGG19 as a feature extractor and image size 64×64 on a Tesla K40 GPU w/ 12GB of memory. A more optimized implementation minimizing Pytorch's memory overhead could, in principle, handle somewhat larger minibatch sizes (as could a more recent Tesla V100 w/ 16 GB).
However, if we increase the image size or the feature extractor size, the memory footprint increases quickly and we will always run out of memory when using larger minibatches, regardless of implementation or hardware. For the experiments presented in this section, we use CelebA as the training set, and the feature extractor is the encoder from an autoencoder that follows a DCGAN-like architecture. We use this feature extractor because it is smaller than VGG19/Resnet18 and allows for minibatches of size up to 512 for image size 64×64. FIG1 shows generated images from GFMN when trained with either MA or our proposed AMA. For MA, we present generated images for GFMN trained with four different batch sizes: 64, 128, 256 and 512 (Figs. 3a, 3b, 3c and 3d, respectively). For AMA, we show results for two different minibatch sizes: 64 and 512 (Figs. 3e and 3f, respectively). We can note that the minibatch size has a huge impact on the quality of generated images when training with MA. With minibatches smaller than 512 (FIG1), almost all images generated by GFMN trained with MA are quite damaged. On the other hand, when using AMA, GFMN generates much better images even with minibatch size 64 (FIG1). For AMA, increasing the minibatch size from 64 to 512 (FIG1) does not seem to improve the quality of generated images for the given dataset and feature extractor. In Appendix A.9, we show a comparison between MA and AMA when the VGG19 ImageNet classifier is used as a feature extractor. A minibatch size of 64 is used for that experiment. We can see in FIG9 that AMA also has a very positive effect on the quality of generated images when a stronger feature extractor is used. An alternative for training with larger minibatches would be the use of multi-GPU, multi-node setups. However, performing large scale experiments is beyond the scope of the current work. Moreover, many practitioners do not have access to a GPU cluster, and the availability of methods that can work on a single GPU with a small memory footprint is essential. An important advantage of GFMN over adversarial methods is its training stability. FIG2 shows the evolution of the generator loss per epoch with some generated examples for an experiment where AMA is used. There is a clear correlation between the quality of generated images and the loss. Moreover, mode collapsing was not observed in our experiments with AMA. In order to evaluate the performance of GFMN on an even more challenging dataset, we trained GFMN VGG19 with different portions of the ImageNet dataset. FIG3 shows some (cherry-picked) images generated by GFMN VGG19 trained on the ImageNet subset that contains different dog breeds (same as used in FIG0). The results are quite impressive given that we perform unconditional generation. FIG3 presents (randomly sampled) images generated by GFMN VGG19 trained with the daisy portion of ImageNet. More results can be found in Appendix A.1. In Tab. 3, we compare GFMN with different adversarial and non-adversarial approaches for CIFAR10 and STL10. The table includes results for recent models that, like ours, use a DCGAN-like (or CNN) architecture in the generator and do not use CIFAR10/STL10 labels while training the generator.
Despite using a frozen cross-domain feature extractor, GFMN outperforms the other systems in terms of FID for both datasets, and achieves the best IS for CIFAR10. We performed additional experiments with a WGAN-GP architecture where: the discriminator is a VGG19 or a Resnet18; the discriminator is pretrained on ImageNet; the generator is pretrained on CIFAR10 through autoencoding. The objective of the experiment is to evaluate if WGAN-GP can benefit from DCNN classifiers pretrained on ImageNet. Although we tried different hyperparameter combinations, we were not able to successfully train WGAN-GP with VGG19 or Resnet18 discriminators. More details about this experiment are in Appendix A.8.

This work is driven towards answering the question of whether one can train effective generative models by performing feature matching on features extracted from pretrained neural networks. The goal is to avoid adversarial training, breaking away from the problematic min/max game completely. According to our experimental results, the answer to our research question is yes. We achieve successful non-adversarial training of generative feature matching networks by introducing different key ingredients: a more robust way to compute the moving average of the mean features by using the ADAM optimizer, which allows us to use small minibatches; the use of features from all layers of pretrained neural networks; the use of features from multiple neural networks at the same time (VGG19 + Resnet18); and the initialization of the generator network. Our quantitative results in Tab. 3 show that GFMN achieves better or similar results compared to the state-of-the-art Spectral Normalization GAN (SN-GAN) BID30 for both CIFAR10 and STL10. This is an impressive result for a non-adversarial feature matching-based approach that uses pretrained cross-domain feature extractors and has stable training.

Table 3: Comparison with adversarial and non-adversarial approaches on CIFAR10 and STL10 (IS: higher is better; FID 5K/50K: lower is better):
GMMN BID23 | CIFAR10 IS 3.47±.03
GMMN+AE BID23 | CIFAR10 IS 3.94±.04
VAE BID27 | CIFAR10 IS 5.62
MMD GAN BID23 | CIFAR10 IS 6.17±.07
MMD dist GAN BID2 | CIFAR10 IS 6.39±.04, FID 40.2 / -
WGAN BID30 | CIFAR10 IS 6.41±.11, FID 42.6 / - | STL10 IS 7.57±.10, FID 64.2
MMDrq GAN BID2 | CIFAR10 IS 6.51±.03, FID 39.9 / -
WGAN-GP BID30 | CIFAR10 IS 6.68±.06, FID 40.2 / - | STL10 IS 8.42±.13, FID 55.1 / -
McGAN | CIFAR10 IS 6.97±.10
SN-GANs BID30 | CIFAR10 IS 7.58±.12, FID 25.5 / - | STL10 IS 8.79±.14, FID 43.2 / -
MoLM-1024 BID36 | CIFAR10 IS 7.55±.08, FID 25.0 / 20.3
MoLM-1536 BID36 | CIFAR10 IS 7.90±.10, FID 23.3 / 18.9

When compared to other MMD approaches BID9 BID23 BID2 BID36, GFMN presents important distinctions (some of them already listed in Sec. 3) which make it an attractive alternative. Compared to GMMN and GMMN+AE, we can see in TAB2 that GFMN achieves far better results. In Appendix A.10, we also show a qualitative comparison between GFMN and GMMN results. The main reason why GFMN results are significantly better than GMMN results is because GFMN uses a strong, robust kernel function (a pretrained DCNN), which, together with our AMA trick, allows the training with small minibatches. On the other hand, the Gaussian kernel used in GMMN requires a very large minibatch size in order to work well, which is impractical due to memory limitations and computational cost. Compared to recent adversarial MMD methods (MMD GAN) BID23 BID2, GFMN also presents significantly better results while avoiding the problematic min/max game. GFMN achieves results similar to the Method of Learned Moments (MoLM) BID36, while using a much smaller number of features to perform matching. The best performing model from BID36, MoLM-1536, uses around 42 million moments to train the CIFAR10 generator, while our best GFMN model uses around 850 thousand moments/features only, almost 50x less.
In other words, MoLM-1536 can be used in large-scale environments only, while GFMN can be used in single GPU environments. One may argue that the best results from GFMN are obtained with feature extractors that were trained in a supervised manner (classifiers). However, there are two important points to note: we use a cross-domain feature extractor and do not use labels from the target datasets (CIFAR10, STL10, MNIST, CelebA); and the accuracy of the classifier does not seem to be the most important factor for generating good features (the VGG19 classifier produces better features although it is less accurate than Resnet18, see Appendix A.3). Moreover, we are confident that GFMN will also achieve state-of-the-art results when trained with features from classifiers trained using unsupervised methods such as the one recently proposed by BID4.

In this work, we introduced GFMN, an effective non-adversarial approach to train generative models. GFMNs are demonstrated to achieve state-of-the-art results while avoiding the challenge of defining and training an adversarial discriminator. Our feature extractors can be easily obtained and provide for a robust and stable training of our generators. Some interesting open questions include: what types of feature extractors other than classifiers and auto-encoders are good for GFMN? What architecture designs are better suited for the purpose of feature extraction in GFMN?

A APPENDIX

We trained GFMN VGG19 with different portions of the ImageNet dataset using images of size 64×64. Although we adopted this image size for speed and memory efficiency purposes, GFMN is also effective at generating images of larger sizes. FIG4 shows (randomly sampled) images generated by GFMN VGG19 trained with the following ImageNet portions: pizza (FIG4), daisy (FIG4), breeds of dogs (FIG4) and Persian cats (FIG4). Note that in this experiment the generators are trained from scratch; there is no initialization of the generators. While pizza, daisy and Persian cats consist of single ImageNet classes, the breeds of dogs portion (same as used in BID43) consists of multiple classes and is therefore a more challenging task, since we are doing unconditional generation. For the experiments with ImageNet, we use a Resnet generator similar to the one used by BID30. In TAB4 we detail the neural net architectures used in our experiments. In both the DCGAN-like generator and discriminator, an extra layer is added when using images of size 64×64. In the VGG19 architecture, after each convolution, we apply batch normalization and ReLU. The Resnet generator is used for ImageNet experiments only. We train the ImageNet classifiers using SGD with a 10^-1 learning rate, 0.9 momentum term, and weight decay set to 5 × 10^-4. We pick the models with the best top 1 accuracy on the validation set over 100 epochs of training; 29.14% for VGG19 (image size 32×32), and 39.63% for Resnet18 (image size 32×32). When training the classifiers we use random cropping and random horizontal flipping for data augmentation. When using VGG19 and Resnet18 as feature extractors in GFMN, we use features from the output of each ReLU that follows a conv. layer, for a total of 16 layers for VGG and 17 for Resnet18. We evaluate our models using two quantitative metrics: Inception Score (IS) and Fréchet Inception Distance (FID) BID15. We followed the same procedure used in previous work to calculate IS BID30 BID36.
For each trained generator, we calculate the IS for 5000 randomly generated images and repeat this procedure 10 times (for a total of 50K generated images), and report the average and the standard deviation of the IS. We compute FID using two sample sizes of generated images: 5K and 50K. In order to be consistent with previous works BID30 BID36 and be able to directly compare our quantitative results with theirs, the FID is computed as follows:

• CIFAR10: the statistics for the real data are computed using the 50K training images. These (real data) statistics are used in the FID computation of both the 5K and 50K samples of generated images. This is consistent with the procedure of both BID30 and BID36 to compute FID for the CIFAR10 experiments.

• STL10: when using 5K generated images, the statistics for the real data are computed using the set of 5K (labeled) training images. This is consistent with the FID computation of BID30. When using 50K generated images, the statistics for the real data are computed using a set of 50K images randomly sampled from the unlabeled STL10 dataset.

The FID computation is repeated 3 times and the average is reported. There is very small variance in the FID results. The use of a pretrained decoder D to initialize the generator gives a good boost in performance when the feature extractor is the encoder from a pretrained autoencoder (see Sec. 4.2.1). In Fig. 7, we show a visual comparison that demonstrates the effect of using the pretrained decoder D to initialize the generator. The generators for the images in the first row (7a, 7b, 7c) were not initialized, while the generators for the images in the second row (7d, 7e, 7f) were initialized. For the three datasets, we can see a significant improvement in image quality when the generator is initialized with the decoder. We use the Laplacian pyramid loss to train the CIFAR10 and CelebA AEs. However, the L2 loss gives results almost as good as the Lap1 loss.

Figure 7: Generated samples from GFMN using a pretrained encoder as feature extractor. Visual comparison of models trained without (top row) and with (bottom row) initialization of the generator.

The experimental results in Sec. 4.2.2 demonstrate that cross-domain feature extractors based on DCNN classifiers are very successful. For instance, we successfully used a VGG19 pretrained on ImageNet to train a GFMN generator for the CelebA dataset. Here, we investigate the impact of the pretraining of autoencoder-based feature extractors in a cross-domain setting. The objective is to further verify if GFMN is dependent on pretraining the autoencoder feature extractors on the same training data where the generator is trained. In Tab. 6, we show the results for different combinations of cross-domain feature extractors and G initialization for STL10 and CIFAR10. The subscript indicates which dataset was used for pretraining. We can see in Tab. 6 that CIFAR10 using E_STL and D_STL has similar performance (even better IS) to using E_CIFAR and D_CIFAR. There is a performance drop when using E_CIFAR and D_CIFAR to train an STL10 generator. We believe this drop is related to the training set size. STL10 contains 100K (unlabeled) training examples while CIFAR10 contains 50K training images. FIG6 shows generated images from generators that were trained with a different number of layers employed for feature matching. In all the results in FIG6, the VGG19 network was used to perform feature extraction. We can see a significant improvement in image quality when more layers are used.
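Returning to the evaluation protocol at the start of this appendix, the IS procedure (10 repetitions of 5000 generated images) can be summarized in a short sketch; sample_images and compute_inception_score are assumed helpers (any standard IS implementation can fill the latter), not functions from the paper.

import numpy as np

def inception_score_protocol(sample_images, compute_inception_score,
                             n_groups=10, group_size=5000):
    # Calculate IS on 10 independent groups of 5000 generated images and
    # report the average and standard deviation, as in BID30 and BID36.
    scores = [compute_inception_score(sample_images(group_size))
              for _ in range(n_groups)]
    return np.mean(scores), np.std(scores)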
The objective of the experiments presented in this section is to evaluate if WGAN-GP can benefit from DCNN classifiers pretrained on ImageNet. In the experiments, we used a WGAN-GP architecture where: the discriminator is a VGG19 or a Resnet18; the discriminator is pretrained on ImageNet; the generator is pretrained on CIFAR10 through autoencoding. Although we tried different hyperparameter combinations, we were not able to successfully train WGAN-GP with VGG19 or Resnet18 discriminators. Indeed, the discriminator, being pretrained on ImageNet, can quickly learn to distinguish between real and fake images. This limits the reliability of the gradient information from the discriminator, which in turn renders the training of a proper generator extremely challenging or even impossible. This is a well-known issue with GAN training BID10, where the training of the generator and discriminator must strike a balance. This phenomenon is covered in Section 3 (illustrated in their FIG0) as one motivation for work like Wasserstein GANs. If a discriminator can distinguish perfectly between real and fake early on, the generator cannot learn properly and the min/max game becomes unbalanced, having no good discriminator gradients for the generator to learn from, producing degenerate models. FIG8 shows some examples of images generated by the unsuccessfully trained models. In this appendix, we present a comparison between the simple moving average (MA) and the ADAM moving average (AMA) for the case where the VGG19 ImageNet classifier is used as a feature extractor. This experiment uses a minibatch size of 64. We can see in FIG9 that AMA has a very positive effect on the quality of generated images. GFMN trained with MA produces various images with some sort of crossing-line artifacts.

A.10 VISUAL COMPARISON BETWEEN GFMN AND GMMN GENERATED IMAGES. FIG10 shows a visual comparison between images generated by GFMN (FIG10) and Generative Moment Matching Networks (GMMN) (FIG10). The GMMN generated images were obtained from BID23. In this appendix, we present a visual comparison between images generated by sampling directly from decoders of pretrained autoencoders, and images generated by GFMN generators which were initialized by those decoders. In FIG0, the images in the top row (12a, 12b and 12c) were generated by decoders trained using CelebA, CIFAR10 and STL10, respectively. Images in the bottom row (12d, 12e and 12f) were generated using GFMN generators that were initialized with the CelebA, CIFAR10 and STL10 decoders, respectively. As expected, sampling from the decoder produces completely noisy images because the latent space is not aligned with the prior distribution p_z. GFMN uses the pretrained decoder as a better starting point and learns an effective implicit generative model, as we can see in Figs. 12d, 12e and 12f. Nevertheless, as demonstrated in Sec. 4.2.1, GFMN is also very effective without generator initialization, especially when using VGG19/Resnet18 feature extractors. Therefore, generator initialization is an interesting positive feature of GFMN, but not an essential aspect of the method. In this appendix, we assess whether GFMN is impacted by computing the mean features of the real data in a minibatch-wise fashion (computed in the minibatch and carried along with a moving average) vs. computing them in a global manner (pre-computing the mean features using the whole training dataset, and keeping them fixed during the training).
Note that this is for the real data only; for the fake data, we need to use a moving average because its mean features change a lot throughout the training process. In FIG1, we show generated images from GFMN trained with either the simple Moving Average (MA) (13a, 13b, 13d and 13e) or the Adam Moving Average (AMA) (13c and 13f). For MA, two minibatch sizes (mbs) are used: 64 and 128. Images in the top row were generated by models that perform feature matching using the minibatch-wise mean of the features from the real data, while the models that generated the images in the bottom row used global mean (gm) features computed on the whole CelebA dataset. We can see in FIG1 that using the global mean features does not improve the performance when training with MA, and also does not seem to have any impact when training with AMA.

A.13 AUTOENCODER FEATURES VS. VGG19 FEATURES FOR CELEBA. In this appendix, we present a comparison in image quality for autoencoder features vs. VGG19 features for the CelebA dataset. We show results for both the simple moving average (MA) and the ADAM moving average (AMA); for both cases we use a minibatch size of 64. In Fig. 14, we show generated images from GFMN trained with either VGG19 features (top row) or autoencoder (AE) features (bottom row). We show images generated by GFMN models trained with the simple moving average (MA) and the Adam moving average (AMA). We can note in the images that, although VGG19 features are from a cross-domain classifier, they lead to much better generation quality than AE features, especially for the MA case.

FIG1: Generated images from GFMN trained with either MA or AMA. For MA, two minibatch sizes (mbs) are used: 64 and 128. Images in the top row were generated by models that perform feature matching using the minibatch-wise mean of the features from the real data, while the models that generated the images in the bottom row used global mean (gm) features computed on the whole CelebA dataset.

FIG2: Generated images from GFMN trained with either VGG19 features (top row) or autoencoder (AE) features (bottom row). We show images generated by GFMN models trained with the simple moving average (MA) and the Adam moving average (AMA). Although VGG19 features are from a cross-domain classifier, they perform much better than AE features, especially for the MA case.
A new non-adversarial feature matching-based approach to train generative models that achieves state-of-the-art results.
967
scitldr
We propose a novel architecture for k-shot classification on the Omniglot dataset. Building on prototypical networks, we extend their architecture to what we call Gaussian prototypical networks. Prototypical networks learn a map between images and embedding vectors, and use their clustering for classification. In our model, a part of the encoder output is interpreted as a confidence region estimate about the embedding point, and expressed as a Gaussian covariance matrix. Our network then constructs a direction- and class-dependent distance metric on the embedding space, using uncertainties of individual data points as weights. We show that Gaussian prototypical networks are a preferred architecture over vanilla prototypical networks with an equivalent number of parameters. We report results consistent with state-of-the-art performance in 1-shot and 5-shot classification both in the 5-way and 20-way regime on the Omniglot dataset. We explore artificially down-sampling a fraction of images in the training set, which improves our performance. Our experiments therefore lead us to hypothesize that Gaussian prototypical networks might perform better in less homogeneous, noisier datasets, which are commonplace in real world applications.

Humans are able to learn to recognize new object categories from a single or small number of examples. This has been demonstrated in a wide range of activities from hand-written character recognition BID10 and motor control BID2, to acquisition of high-level concepts BID9. Replicating this kind of behavior in machines is the motivation for studying few-shot learning. Parametric deep learning has been performing well in settings with an abundance of data. In general, deep learning models have a very high functional expressivity and capacity, and rely on being slowly, iteratively trained in a supervised regime. The influence of a particular example within the training set is therefore small, as the training is designed to capture the general structure of the dataset. This prevents rapid introduction of new classes after training. BID15 In contrast, few-shot learning requires very fast adaptation to new data. In particular, k-shot classification refers to a regime where classes unseen during training must be learned using k labeled examples. Non-parametric models, such as k-nearest neighbors (kNN), do not overfit; however, their performance strongly depends on the choice of distance metric. BID0 Architectures combining parametric and non-parametric models, as well as matching training and test conditions, have been performing well on k-shot classification recently. In this paper we develop a novel architecture based on the prototypical networks used in BID16, and train it and test it on the Omniglot dataset BID9. Vanilla prototypical networks map images into embedding vectors, and use their clustering for classification. They divide a batch into support and query images, and use the embedding vectors of the support set to define a class prototype, a typical embedding vector for a given class. Proximity to these is then used for classification of query images. Our model, which we call the Gaussian prototypical network, maps an image into an embedding vector, and an estimate of the image quality. Together with the embedding vector, a confidence region around it is predicted, characterized by a Gaussian covariance matrix. Gaussian prototypical networks learn to construct a direction- and class-dependent distance metric on the embedding space.
We show that our model is a preferred way of using additional trainable parameters compared to increasing the dimensionality of vanilla prototypical networks. Our goal is to show that by allowing our model to express its confidence in individual data points, we reach better results. We also experiment with intentionally corrupting parts of our dataset in order to explore the extendability of our method to noisy, inhomogeneous real world datasets, where weighting individual data points might be crucial for performance. We report, to our knowledge, performance consistent with state-of-the-art in 1-shot and 5-shot classification both in the 5-way and 20-way regime on the Omniglot dataset. BID9 By studying the response of our model to partially down-sampled data, we hypothesize that its advantage might be more significant in lower quality, inhomogeneous datasets. This paper is structured as follows: We describe related work in Section 2. We then proceed to introduce our methods in Section 3. The episodic training scheme is also presented there. We discuss the Omniglot dataset in Section 4, and our experiments in Section 5. Finally, our results are presented in Section 6.

Non-parametric models, such as k-nearest neighbors (kNN), are ideal candidates for few-shot classifiers, as they allow for incorporation of previously unseen classes after training. However, they are very sensitive to the choice of distance metric. BID0 Using the distance in the space of inputs directly (e.g. raw pixel values) does not produce high accuracies, as the connection between the image class and its pixels is very non-linear. A straightforward modification in which a metric embedding is learned and then used for kNN classification has yielded good results, as demonstrated by BID5, BID18, BID8, and BID1. An approach using matching networks has been proposed in BID17, in effect learning a distance metric between pairs of images. A noteworthy feature of the method is its training scheme, where each mini-batch (called an episode) tries to mimic the data-poor test conditions by sub-sampling the number of classes as well as the numbers of examples in each. It has been demonstrated that such an approach improves performance on few-shot classification. BID17 We therefore use it as well. Instead of learning on the dataset directly, it has recently been proposed BID13 to train an LSTM BID6 to predict updates to a few-shot classifier given an episode as its input. This approach is referred to as meta-learning. Meta-learning has been reaching high accuracies on Omniglot BID9, as demonstrated by BID4 and BID12. A task-agnostic meta-learner based on temporal convolutions has been proposed in BID11, currently outperforming other approaches. Combinations of parametric and non-parametric methods have been the most successful in few-shot learning recently. BID16 BID14 BID7 Our approach is specific to classification of images, and does not attempt to solve the problem via meta-learning. We build on the model presented in BID16, which maps images into embedding vectors, and uses their clustering for classification. The novel feature of our model is that it predicts its confidence about individual data points via a learned, image-dependent covariance matrix. This allows it to construct a richer embedding space to which images are projected. Their clustering under a direction- and class-dependent distance metric is then used for classification.

FIG1: A diagram of the function of the Gaussian prototypical network.
An encoder maps an image into a vector in the embedding space (dark circles). A covariance matrix is also output for each image (dark ellipses). Support images are used to define the prototypes (stars) and covariance matrices (light ellipses) of the particular class. The distances between centroids and encoded query images, modified by the total covariance of a class, are used to classify query images. The distances are shown as dashed gray lines for a particular query point.

In this paper, we build on the prototypical networks described in BID16. We extend their architecture to what we call a Gaussian prototypical network, allowing the model to reflect the quality of individual data points (images) by predicting their embedding vectors as well as confidence regions around them, characterized by a Gaussian covariance matrix. A vanilla prototypical network comprises an encoder that maps an image into an embedding vector. A batch contains a subset of the available training classes. In each iteration, images for each class are randomly split into support and query images. The embeddings of support images are used to define class prototypes, embedding vectors typical for the class. The proximity of query image embeddings to the class prototypes is used for classification. The encoder architectures of vanilla and Gaussian prototypical networks do not differ. The key difference is the way encoder outputs are interpreted and used, and how the metric on the embedding space is constructed. The function of the Gaussian network is presented in FIG1. We use a multi-layer convolutional neural network without an explicit, final fully connected layer to encode images into high-dimensional Euclidean vectors. For a vanilla prototypical network, the encoder is a function taking an image I and transforming it into a vector x as

x = f_θ(I), with f_θ : R^{H×W×C} → R^D,

where H and W are the height and width of the input image, and C is the number of its channels. D is the embedding dimension of our vector space, which is a hyperparameter of the model. θ are the trainable weights of the encoder. For a Gaussian prototypical network, the output of the encoder is a concatenation of an embedding vector x ∈ R^D and a real vector s_raw ∈ R^{D_S} relevant to the covariance matrix Σ ∈ R^{D×D}. Therefore

[x, s_raw] = f_θ(I), with f_θ : R^{H×W×C} → R^{D+D_S},

where D_S is the number of degrees of freedom of the covariance matrix. We explore three variants of the Gaussian prototypical network: a) Radius covariance estimate. D_S = 1 and only a single real number s_raw ∈ R^1 is generated per image. As such the covariance matrix has the form Σ = diag(σ, σ, ..., σ), where σ is calculated from the raw encoder output s_raw. The confidence estimate is therefore not direction-sensitive. b) Diagonal covariance estimate. D_S = D and the dimension of the covariance estimate is the same as that of the embedding space. s_raw ∈ R^D is generated per image. Therefore the covariance matrix has the form Σ = diag(σ), where σ is calculated from the raw encoder output s_raw. This allows the network to express direction-dependent confidence about a data point. c) Full covariance estimate. A full covariance matrix is output per data point. This method proved to be needlessly complex for the tasks given and therefore was not explored further. We were using 2 encoder architectures: 1) a small architecture, and 2) a big architecture. The small architecture corresponded to the one used in BID16, and we used it to validate our own experiments with respect to it.
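The split of the encoder output described above is mechanically simple; the following sketch (with illustrative names, not the paper's code) shows it for the radius and diagonal variants, where the encoder emits D + D_S numbers per image.

import torch

def split_encoder_output(raw, embed_dim, cov_dof=1):
    # raw: (batch, embed_dim + cov_dof) encoder output.
    # cov_dof = 1 gives the radius estimate; cov_dof = embed_dim gives the diagonal estimate.
    x = raw[:, :embed_dim]                         # embedding vectors, shape (batch, D)
    s_raw = raw[:, embed_dim:embed_dim + cov_dof]  # raw covariance output, shape (batch, D_S)
    return x, s_raw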
The big architecture was used to see the effect of an increased model capacity on accuracy. As a basic building block, we used the sequence of layers in Equation 3:

conv 3×3 → batch normalization → ReLU → max pool 2×2.    (3)

Both architectures were composed of 4 such blocks stacked together. We explored 4 different methods of translating the raw covariance matrix output of the encoder into an actual covariance matrix. Since we primarily deal with the inverse of the covariance matrix S = Σ^{-1}, we were predicting it directly. Let the relevant part of the raw encoder output be S_raw. The methods are as follows: a) S = 1 + softplus(S_raw), where softplus(x) = log(1 + e^x) and it is applied componentwise. Since softplus(x) > 0, this guarantees S > 1 and the encoder can only make data points less important. The value of S is also not limited from above. Both of these properties prove beneficial for training. Our best models used this regime for initial training. b) S = 1 + sigmoid(S_raw), where sigmoid(x) = 1/(1 + e^{-x}) and it is applied componentwise. Since sigmoid(x) > 0, this guarantees S > 1 and the encoder can only make data points less important. The value of S is bounded from above, as S < 2, and the encoder is therefore more constrained. c) S = 1 + 4 sigmoid(S_raw), and therefore 1 < S < 5. We used this to explore the effect of the size of the domain of covariance estimates on performance. d) S = offset + scale × softplus(S_raw / div), where offset, scale, and div are initialized to 1.0 and are trainable. Our best models used this regime for late-stage training, as it is more flexible and data-driven than a).

A key component of the prototypical model is the episodic training regime described in BID16 and modeled on BID17. During training, a subset of N_c classes is chosen from the total number of classes in the training set (without replacement). For each of these classes, N_s support examples are chosen at random, as well as N_q query examples. The encoded embeddings of the support examples are used to define where a particular class prototype lies in the embedding space. The distances between the query examples and positions of class prototypes are used to classify the query examples and to calculate the loss. For the Gaussian prototypical network, the covariance of each embedding point is estimated as well. A diagram of the process is shown in FIG1. For a Gaussian prototypical network, the radius or a diagonal of a covariance matrix is output together with the embedding vector (more precisely its raw form is, as detailed in Section 3.1). These are then used to weight the embedding vectors corresponding to support points of a particular class, as well as to calculate a total covariance matrix for the class. The distance d_c(i) from a class prototype c to a query point i is calculated as

d_c(i) = sqrt( (x_i − p_c)^T S_c (x_i − p_c) ),    (4)

where p_c is the centroid, or prototype, of the class c, and S_c = Σ_c^{-1} is the inverse of its class covariance matrix. The Gaussian prototypical network is therefore able to learn a class- and direction-dependent distance metric in the embedding space. We found that the speed of training and its accuracy depend strongly on how distances are used to construct a loss. We conclude that the best option is to work with linear Euclidean distances, i.e. d_c(i). The specific form of the loss function used is presented in Algorithm 1. A critical part of a prototypical network is the creation of a class prototype from the available support points of a particular class.
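A minimal sketch of methods (a) and (b) for turning the raw encoder output into the inverse covariance S, together with the linear class-dependent distance d_c(i) of Equation 4 (diagonal S_c; in the radius variant s_c is a single number broadcast over dimensions). The function names are illustrative, not from the paper's code.

import torch

def inverse_covariance(s_raw, regime="softplus"):
    if regime == "softplus":                  # method (a): S > 1, unbounded above
        return 1.0 + torch.nn.functional.softplus(s_raw)
    return 1.0 + torch.sigmoid(s_raw)         # method (b): 1 < S < 2

def distance(x, p_c, s_c):
    # Linear (non-squared) distance: sqrt((x - p_c)^T S_c (x - p_c)) for diagonal S_c.
    return torch.sqrt((((x - p_c) ** 2) * s_c).sum(dim=-1))

Returning to the creation of class prototypes from the support points: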
We propose a variance-weighted mean of the support point embeddings as the class prototype,

p_c = (Σ_i s_i • x_i) / (Σ_i s_i),    (5)

where • denotes a component-wise multiplication, and the division is also component-wise. The diagonal of the inverse of the class covariance matrix is then calculated as

s_c = Σ_i s_i.    (6)

This corresponds to the optimal combination of Gaussians centered on the individual points into an overall class Gaussian, hence the name of the network. The elements of s are effectively 1/σ^2. Equations 5 and 6 therefore correspond to weighting by 1/σ^2. The full algorithm is described in Algorithm 1.

To estimate the accuracy of a model on the test set, we classify the whole test set for every number of support points N_s = k in the range k ∈ [1, ..., 19]. The number of query points for a particular k is therefore N_q = 20 − N_s, as Omniglot provides 20 examples of each class. The accuracies are then aggregated, and for a particular stage of the model training a k-shot classification accuracy as a function of k is determined. Since we are not using a designated validation set, we ensure our impartiality by considering the test results for the 5 highest training accuracies, and calculate their mean and standard deviation. By doing that, we prevent optimizing our results for the test set, and furthermore obtain error bounds on the resulting accuracies. We evaluate our models in 5-way and 20-way test classification to directly compare to existing literature. We used the Omniglot dataset. BID9 Omniglot contains 1623 character classes from 50 alphabets (real and fictional) and 20 hand-written, gray-scale, 105 × 105 pixel examples of each. We down-sampled them to 28 × 28 × 1, subtracted their mean, and inverted them. We were using the recommended split into 30 training alphabets and 20 test alphabets, as suggested by BID9, and used by BID16. The training set included overall 964 unique character classes, and the test set 659 of them. There was no class overlap between the training and test datasets. We did not use a separate validation set as we did not fine-tune hyperparameters and chose the best performing model based on training accuracies alone (see Section 3.4). To extend the number of classes, we augmented the dataset by rotating each character by 90°, 180°, and 270°, and defined each rotation to be a new character class on its own. The same approach is used in BID17 and BID16. An example of an augmented character is shown in FIG2. This increased the number of classes 4-fold. In total, there were 77,120 images
We also compared them to vanilla prototypical networks, and showed that our Gaussian variant is a favorable way of using additional trainable parameters compared to increasing embedding space dimensionality. We found that predicting a single number per embedding point (the radius method in Section 3.1) works the best on Omniglot. In general, we explored the size of the encoder (small, and big, as described in Section 3), the Gaussian/vanilla prototypical network comparison, the distance metric (cosine, √ L 2, L 2, and L 2 2), the number of degrees of freedom of the covariance matrix in the Gaussian networks (radius, and diagonal estimates, see Section 3.1), and the dimensionality of the embedding space. We also explored augmenting the input dataset by down-sampling a subset of it to encourage the usage of covariance estimates by the network, and found that this improves (k > 1)-shot performance. We were using the Adam optimizer with an initial learning rate of 2 × 10 −3. We halved the learning rate every 2000 episodes ≈ 30 epochs. All our models were implemented in TensorFlow, and ran on a single NVidia K80 GPU in Google Cloud. The training time of each model was less than a day. We trained our models with N c = 60 classes (60-way classification) at training time, and tested on N ct = 20 classes (20-way) classification. For our best-performing models, we also conducted a final N ct = 5 (5-way) classification test to compare our to literature. During training, each class present in the mini-batch comprised N s = 1 support points, as we found that limiting the number of support points leads to better accuracies. This could intuitively be understood as matching the training regime to the test regime, as done in BID17. The remaining N q = 20 − N s = 19 images per class were used as query points. We verified, provided that the covariance estimate is not needlessly complex, that using encoder outputs as covariance estimates is more advantageous than using the same number of parameters as additional embedding dimension. This holds true for the radius estimate (i.e. one real number per embedding vector), however, the diagonal estimate does not seem to help with performance (keeping the number of parameters equal). This effect is shown in FIG3. The best performing model was initially trained on the undamaged dataset for 220 epochs. The training then continued with 1.5% of images down-sampled to 24 × 24, 1.0% down-sampled to 20 × 20, and 0.5% down-sampled to 16 × 16 for 100 epochs. Then with 1.5% down-sampled to 23 × 23 and 1.0% down-sampled to 17 × 17 for 20 epochs, and 1.0% down-sampled to 23 × 23 for 10 epochs. These choices were quite arbitrary and not optimized over. The purposeful damage to the dataset encouraged usage of the covariance estimate and increased (k > 1)-shot , as demonstrated in Figure 4.The comparison of our models to from literature is presented in Table 1. To our knowledge, our models perform consistently with state-of-the-art in 1-shot and 5-shot classification both in 5-way and 20-way regime on the Omniglot dataset. In 5-shot 5-way classification in particular, we are reaching very close to perfect performance (99.73 ± 0.04 %) and therefore conclude that a more complex dataset is needed for further few-shot learning algorithms development. 
Figure 4: The effect of down-sampling a part of the training set on k-shot test accuracy. The network trained on purposefully damaged data outperforms the one trained on unmodified data, as it learns to utilize covariance estimates better.

Table 1: The results of our experiments as compared to other papers. To our knowledge, our models perform consistently with state-of-the-art results in 1-shot and 5-shot classification in both the 5-way and 20-way regimes on the Omniglot dataset.

Model                          | 20-way 1-shot | 20-way 5-shot | 5-way 1-shot  | 5-way 5-shot
Matching networks BID17        | 93.8%         | 98.5%         | 98.1%         | 98.9%
Matching networks BID17        | 93.5%         | 98.7%         | 97.9%         | 98.7%
Neural statistician BID3       | 93.2%         | 98.1%         | 98.1%         | 99.5%
Prototypical network BID16     | 96.0%         | 98.9%         | 98.8%         | 99.7%
-                              | -             | -             | 98.7 ± 0.4%   | 99.9 ± 0.3%
BID12                          | 98.9%         | -             | -             | -
TCML BID11                     | 97.64 ± 0.30% | 99.36 ± 0.18% | 98.96 ± 0.20% | 99.75 ± 0.11%
Gauss (radius) (ours)          | 97.02 ± 0.40% | 99.16 ± 0.11% | 99.02 ± 0.11% | 99.66 ± 0.04%
Gauss (radius) damage (ours)   | 96.94 ± 0.31% | 99.29 ± 0.09% | 99.07 ± 0.07% | 99.73 ± 0.04%

In order to validate our assumption that the Gaussian prototypical network outperforms the vanilla version due to its ability to predict covariances of individual embedded images, and therefore its ability to down-weight them, we studied the distribution of predicted values of s (see Section 3.1 for details) for our best performing network on undamaged and damaged test data. The network was trained on partially down-sampled training data. For the undamaged test set, the vast majority of covariance estimates took the same value, indicating that the network did not use its ability to down-weight data points. However, for a partially down-sampled test set, the distribution of magnitudes of covariance estimates became significantly broader. We interpret this as confirmation that the network learned to put less emphasis on down-sampled images. A comparison of both distributions is shown in Figure 5.

Figure 5: Predicted covariances for the original test set and a partially down-sampled version of it. The Gaussian network learned to down-weight damaged examples by predicting a higher s, as apparent from the heavier tail of the yellow distribution. The distributions are aligned together, as only the difference between the leading edge and a value influences classification.

In this paper we proposed Gaussian prototypical networks for few-shot classification, an improved architecture based on prototypical networks BID16. We tested our models on the Omniglot dataset, and explored different approaches to generating a covariance matrix estimate together with an embedding vector. We showed that Gaussian prototypical networks outperform vanilla prototypical networks with a comparable number of parameters, and therefore that our architecture choice is beneficial. We found that estimating a single real number on top of an embedding vector works better than estimating a diagonal or a full covariance matrix. We suspect that lower quality, less homogeneous datasets might prefer a more complex covariance matrix estimate. Contrary to BID16, we found that the best results are obtained by training in the 1-shot regime. Our results are consistent with state-of-the-art results in 1-shot and 5-shot classification in both the 5-way and 20-way regimes on the Omniglot dataset. Especially for 5-way classification, our results are very close to perfect performance. We obtained better accuracies (in particular for (k > 1)-shot classification) by artificially down-sampling fractions of our training dataset, encouraging the network to fully utilize covariance estimates.
We hypothesize that the ability to learn the embedding as well as its uncertainty would be even more beneficial for poorer-quality, heterogeneous datasets, which are commonplace in real world applications. There, down-weighting some data points might be crucial for faithful classification. This is supported by our experiments with down-sampling Omniglot.
A novel architecture for few-shot classification capable of dealing with uncertainty.
968
scitldr
We show that the output of a (residual) CNN with an appropriate prior over the weights and biases is a GP in the limit of infinitely many convolutional filters, extending similar results for dense networks. For a CNN, the equivalent kernel can be computed exactly and, unlike "deep kernels", has very few parameters: only the hyperparameters of the original CNN. Further, we show that this kernel has two properties that allow it to be computed efficiently; the cost of evaluating the kernel for a pair of images is similar to a single forward pass through the original CNN with only one filter per layer. The kernel equivalent to a 32-layer ResNet obtains 0.84% classification error on MNIST, a new record for GPs with a comparable number of parameters. Convolutional Neural Networks (CNNs) have powerful pattern-recognition capabilities that have recently given dramatic improvements in important tasks such as image classification BID13. However, as CNNs are increasingly being applied in real-world, safety-critical domains, their vulnerability to adversarial examples BID27 BID15, and their poor uncertainty estimates, are becoming increasingly problematic. Bayesian inference is a theoretically principled and demonstrably successful (BID26 BID7) framework for learning in the face of uncertainty, which may also help to address the problem of adversarial examples BID9. Unfortunately, Bayesian inference in CNNs is extremely difficult due to the very large number of parameters, requiring highly approximate factorised variational approximations BID1 BID8, or requiring the storage of large numbers of posterior samples (BID16; BID19). Other methods such as those based on Gaussian Processes (GPs) are more amenable to Bayesian inference, allowing us to compute the posterior uncertainty exactly BID24. This raises the question of whether it might be possible to combine the pattern-recognition capabilities of CNNs with exact probabilistic computations in GPs. Two such approaches exist in the literature. First, deep convolutional kernels parameterise a GP kernel using the weights and biases of a CNN, which is used to embed the input images into some latent space before computing their similarity. The CNN parameters of the resulting kernel then have to be optimised by gradient descent. However, the large number of kernel parameters in the CNN reintroduces the risk of overconfidence and overfitting. To avoid this risk, we need to infer a posterior over the CNN kernel parameters, which is as difficult as directly inferring a posterior over the parameters of the original CNN. Second, it is possible to define a convolutional GP BID22, although such kernels are expensive to evaluate. Furthermore, we show that two properties of the GP kernel induced by a CNN allow it to be computed very efficiently. First, in previous work it was necessary to compute the covariance matrix for the output of a single convolutional filter applied at all possible locations within a single image BID22, which was prohibitively computationally expensive. In contrast, under our prior, the downstream weights are independent with zero-mean, which decorrelates the contribution from each location, and implies that it is necessary only to track the patch variances, and not their covariances. Second, while it is still necessary to compute the variance of the output of a convolutional filter applied at all locations within the image, the specific structure of the kernel induced by the CNN means that the variance at every location can be computed simultaneously and efficiently as a convolution.
Finally, we empirically demonstrate the performance increase coming from adding translation-invariant structure to the GP prior. Without computing any gradients, and without augmenting the training set (e.g. using translations), we obtain 0.84% error rate on the MNIST classification benchmark, setting a new record for nonparametric GP-based methods. For clarity of exposition, we will treat the case of a 2D convolutional NN. The result applies straightforwardly to nD convolutions, dilated convolutions and upconvolutions ("deconvolutions"), since they can be represented as linear transformations with tied coefficients (see Fig. 1). The network takes an arbitrary input image X of height H^(0) and width D^(0). DISPLAYFORM0 Each row, which we denote x_1, x_2, ..., x_C, corresponds to a channel of the image (e.g. C = 3 for RGB), flattened to form a vector. The first activations A^(1)(X) are a linear transformation of the inputs. For i ∈ {1, . . ., C^(1)}: DISPLAYFORM1 We consider a network with L hidden layers. The other activations of the network, from A^(2)(X) up to A^(L+1)(X), are defined recursively: DISPLAYFORM2 Here, the term with indices i, j corresponds to applying the filter to the µth convolutional patch of the channel x_j, and the result represents the flattened jth channel of the image that results from applying a convolutional filter to φ(A(X)). The structure of the pseudo-weight matrices W is sparse: an element is zero where the filter does not touch the corresponding input location, and contains the filter weight where it does, as illustrated in Fig. 1. The outputs of the network are the last activations, A^(L+1)(X). In the classification or regression setting, the outputs are not spatially extended, so we have H^(L+1) = D^(L+1) = 1, which is equivalent to a fully-connected output layer. In this case, the pseudo-weights W^(L+1) form a dense matrix. Finally, we define the prior distribution over functions by making the filters U_{i,j} and biases b_i be independent Gaussian random variables (RVs). For each layer ℓ, channels i, j and locations within the filter x, y: DISPLAYFORM0 Note that, to keep the activation variance constant, the weight variance is divided by the number of input channels. The weight variance can also be divided by the number of elements of the filter, which makes it equivalent to the NN weight initialisation scheme introduced by BID10. We follow the proofs by BID18 and BID20 to show that the output of the CNN described in the previous section, A^(L+1), defines a GP indexed by the inputs, X. Their proof (BID18) proceeds by applying the multivariate Central Limit Theorem (CLT) to each layer in sequence, i.e. taking the limit as N^(1) → ∞, then N^(2) → ∞, etc., where N^(ℓ) is the number of hidden units in layer ℓ. By analogy, we sequentially apply the multivariate CLT by taking the limit as the number of channels goes to infinity, i.e. C^(1) → ∞, then C^(2) → ∞, etc. While this is the simplest approach to taking the limits, other potentially more realistic approaches also exist BID20. The fundamental quantity we consider is a vector formed by concatenating the feature maps (or equivalently channels). DISPLAYFORM0 This quantity (and the following arguments) can all be extended to the case of countably finite numbers of input points. Induction base case. For any pair of data points, X and X′, the feature maps corresponding to the jth channel, a^(1)_j(X, X′), have a multivariate Gaussian joint distribution. This is because each element is a linear combination of shared Gaussian random variables: the biases, b^(1)_i, and the filter weights. DISPLAYFORM1 where 1 is a vector of all-ones. While the elements within a feature map display strong correlations, different feature maps are independent and identically distributed (iid) conditioned on the data (i.e.
a^(1)_j(X, X′) and a^(1)_{j′}(X, X′) are iid for j ≠ j′). Induction step. Consider the feature maps at the ℓth layer, a^(ℓ)_j(X, X′), to be iid multivariate Gaussian RVs (i.e. for j ≠ j′, a^(ℓ)_j(X, X′) and a^(ℓ)_{j′}(X, X′) are iid). Our goal is to show that, taking the number of channels at layer ℓ to infinity (i.e. C^(ℓ) → ∞), the same properties hold at the next layer (i.e. all feature maps, a^(ℓ+1)_i(X, X′), are iid multivariate Gaussian RVs). Writing Eq. 2 for two training examples, X and X′, we obtain, DISPLAYFORM2 We begin by showing that a^(ℓ+1)_i(X, X′) is a multivariate Gaussian RV. The first term is multivariate Gaussian, as it is a linear function of the bias b^(ℓ+1)_i, which is itself iid Gaussian. We can apply the multivariate CLT to show that the second term is also Gaussian, because, in the limit as C^(ℓ) → ∞, it is the sum of infinitely many iid terms: the a^(ℓ)_j(X, X′) are iid by assumption, and the W^(ℓ+1)_{i,j} are iid by definition. Note that the same argument applies to all feature maps jointly, so all elements of A^(ℓ+1)(X, X′) (defined by analogy with eq. 4) are jointly multivariate Gaussian. Following BID18, to complete the argument, we need to show that the output feature maps are iid, i.e. that a^(ℓ+1)_i(X, X′) and a^(ℓ+1)_{i′}(X, X′) are iid for i ≠ i′. To show that they are independent, remember that a^(ℓ+1)_i(X, X′) and a^(ℓ+1)_{i′}(X, X′) are jointly Gaussian, so it is sufficient to show that they are uncorrelated, and we can show that they are uncorrelated because the weights, W^(ℓ+1)_{i,j}, are independent with zero mean, eliminating any correlations that might arise through the shared RV, φ(a^(ℓ)_j(X, X′)). In the appendix, we consider the more complex case where we take limits simultaneously. Here we derive a computationally efficient kernel corresponding to the CNN described in the previous section. It is surprising that we can compute the kernel efficiently because the feature maps, a^(ℓ)_i(X), display rich covariance structure due to the shared convolutional filter. Computing and representing these covariances would be prohibitively computationally expensive. However, in many cases we only need the variance of the output, e.g. in the case of classification or regression with a final dense layer. It turns out that this requirement propagates backwards through the convolutional network, implying that for every layer, we only need the "diagonal covariance" of the activations: the covariance between corresponding elements of the feature maps. DISPLAYFORM0 A GP is completely specified by its mean and covariance (kernel) functions. These give the parameters of the joint Gaussian distribution of the RVs indexed by any two inputs, X and X′. For the purposes of computing the mean and covariance, it is easiest to consider the network as being written entirely in index notation, DISPLAYFORM0 where ℓ and ℓ+1 denote the input and output layers respectively, j and i ∈ {1, . . ., C^(ℓ+1)} denote the input and output channels, and ν and µ ∈ {1, . . ., H^(ℓ+1)D^(ℓ+1)} denote the locations within the input and output channels or feature maps. The mean function is thus easy to compute: DISPLAYFORM1 since the weights W^(ℓ+1)_{i,j,µ,ν} have zero mean and are independent of the activations at the previous layer, φ(A^(ℓ)_{j,ν}(X)). Now we show that it is possible to efficiently compute the covariance function. This is surprising because for many networks, we need to compute the covariance of activations between all pairs of locations in the feature map, DISPLAYFORM2 and this object is extremely high-dimensional, DISPLAYFORM3 However, it turns out that we only need to consider the "diagonal" covariance (i.e.
we only need the covariance C(A^(ℓ)_{i,µ}(X), A^(ℓ)_{i,µ}(X′)) at corresponding locations µ). This is true at the output layer (L + 1): in order to achieve an output suitable for classification or regression, we use only a single output location H^(L+1) = D^(L+1) = 1, with a number of "channels" equal to the number of outputs/classes, so it is only possible to compute the covariance at that single location. We now show that, if we only need the covariance at corresponding locations in the outputs, we only need the covariance at corresponding locations in the inputs, and this requirement propagates backwards through the network. Formally, as the activations are composed of a sum of terms, their covariance is the sum of the covariances of all those underlying terms, DISPLAYFORM5 As the terms in the covariance have mean zero, and as the weights and activations from the previous layer are independent, DISPLAYFORM6 Algorithm 1 computes the kernel by this recursion; its loop has the following shape: 1: compute the first-layer variances v^(1)(X, X), v^(1)(X, X′) and v^(1)(X′, X′) (Eq. 10); 2: for ℓ = 1, . . ., L do; 3: compute the elementwise covariance of the post-nonlinearity activations, using the ReLU closed form below or some other nonlinearity; 4: compute v^(ℓ+1) for each of the pairs (X, X), (X, X′) and (X′, X′) at every location µ ∈ {1, . . ., H^(ℓ+1)D^(ℓ+1)} using Eq. 11; 5: end for; 6: output the scalar K^(L+1)(X, X′). The weights are independent for different channels: DISPLAYFORM9 so we can eliminate the sums over j and ν: DISPLAYFORM10 The µth row of W^(ℓ+1) is zero for indices ν that do not belong to its convolutional patch, so we can restrict the sum over ν to that region. We also define v^(ℓ)_g(X, X′) to emphasise that the covariances are independent of the output channel, j. The variance of the first layer is DISPLAYFORM12 And we do the same for the other layers, DISPLAYFORM13 where DISPLAYFORM14 is the covariance of the activations, which is again independent of the channel. The elementwise covariance on the right-hand side of this recursion can be computed in closed form for many choices of φ if the activations are Gaussian. For each element of the activations, one needs to keep track of the 3 distinct entries of the bivariate covariance matrix between the inputs, K^(ℓ)(X, X), K^(ℓ)(X, X′) and K^(ℓ)(X′, X′). For example, for the ReLU nonlinearity (φ(x) = max(0, x)), one can adapt BID5 in the same way as Matthews et al. (2018a, section 3) to obtain DISPLAYFORM1 where θ^(ℓ) = arccos( K^(ℓ)(X, X′) / √(K^(ℓ)(X, X) K^(ℓ)(X′, X′)) ). We now have all the pieces for computing the kernel, as written in Algorithm 1. Putting together the first-layer variance and the recursion of Eq. 11 gives us the surprising result that the diagonal covariances of the activations at layer ℓ+1 only depend on the diagonal covariances of the activations at layer ℓ. This is very important, because it makes the computational cost of the kernel be within a constant factor of the cost of a forward pass for the equivalent CNN with 1 filter per layer. Thus, the algorithm is more efficient than one would naively think: a priori, one would need to compute the covariance between all the elements of a feature map, which the diagonal recursion avoids. Furthermore, the particular form of the kernel (Eq. 1 and Eq. 2) implies that the required variances and covariances at all required locations can be computed efficiently as a convolution. DISPLAYFORM0 The induction step in the argument for GP behaviour from Sec. 2.2 depends only on the previous activations being iid Gaussian. Since all the activations are iid Gaussian, we can add skip connections between the activations of different layers while preserving GP behaviour, e.g. between A^(ℓ+1) and A^(ℓ−s), DISPLAYFORM0 where s is the number of layers that the skip connection spans. If we change the NN recursion (Eq. 2) to DISPLAYFORM1 then the kernel recursion (Eq. 11) becomes DISPLAYFORM2 This way of adding skip connections is equivalent to the "pre-activation" shortcuts described by BID11. Remarkably, the natural way of adding residual connections to NNs is the one that performed best in their empirical evaluations.
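As a concrete illustration of the diagonal recursion, the sketch below propagates the three tracked quantities for a toy 1D CNN with ReLU nonlinearities, using the standard arc-cosine closed form for the ReLU expectation (which is what adapting BID5 yields). The He-style 1/filter-size scaling, the hyperparameter defaults, and all names are our assumptions; real image kernels would use 2D convolutions.

```python
import numpy as np

def relu_cov(k12, k11, k22):
    """E[relu(a) relu(a')] for a zero-mean bivariate Gaussian with variances
    k11, k22 and covariance k12 (the ReLU closed form referenced above)."""
    r = np.sqrt(k11 * k22)
    theta = np.arccos(np.clip(k12 / np.maximum(r, 1e-12), -1.0, 1.0))
    return r / (2 * np.pi) * (np.sin(theta) + (np.pi - theta) * np.cos(theta))

def cnn_gp_kernel(x, xp, filter_sizes, sw2=1.0, sb2=0.1):
    """Diagonal-covariance recursion for a 1D CNN with ReLU nonlinearities.
    x, xp: (D,) flattened single-channel inputs, D large enough for the filters.
    Only v(X,X), v(X,X') and v(X',X') at corresponding locations are tracked,
    each propagated with a 'valid' convolution (the recursion as a convolution).
    """
    k11, k12, k22 = x * x, x * xp, xp * xp          # first-layer patch products
    for fs in filter_sizes:
        w = np.full(fs, sw2 / fs)                   # prior filter variances (He-style, assumed)
        k11 = sb2 + np.convolve(k11, w, mode="valid")
        k12 = sb2 + np.convolve(k12, w, mode="valid")
        k22 = sb2 + np.convolve(k22, w, mode="valid")
        # Elementwise covariance of the post-nonlinearity activations.
        k11, k12, k22 = (relu_cov(k11, k11, k11),
                         relu_cov(k12, k11, k22),
                         relu_cov(k22, k22, k22))
    # Final dense readout: average over remaining locations (sw2/sb2 scaling omitted).
    return float(np.mean(k12))
```

Note that the cost per layer is a single 1D convolution over each of the three tracked arrays, mirroring the claim that evaluating the kernel costs about one forward pass of the CNN with one filter per layer.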
We evaluate our kernel on the MNIST handwritten digit classification task. Classification likelihoods are not conjugate for GPs, so we must make an approximation, and we follow BID18 in re-framing classification as multi-output regression. The training set is split into N = 50000 training and 10000 validation examples. The regression targets Y ∈ {−1, 1}^{N×10} are a one-hot encoding of the example's class: y_{n,c} = 1 if the nth example belongs to class c, and −1 otherwise. Training is exact conjugate likelihood GP regression with noiseless targets Y BID24. First we compute the N×N kernel matrix K_xx, which contains the kernel between every pair of examples. Then we compute K_xx^{−1} Y using a linear system solver. The test set has N_T = 10000 examples. We compute the N_T × N matrix K_x*x, the kernel between each test example and all the training examples. The predictions are given by the row-wise maximum of K_x*x K_xx^{−1} Y. For the "ConvNet GP" and "Residual CNN GP" (Table 1) we optimise the kernel hyperparameters by random search. We draw M random hyperparameter samples, compute the resulting kernel's performance on the validation set, and pick the highest performing run. The kernel hyperparameters are: σ_b², σ_w²; the number of layers; the convolution stride, filter sizes and edge behaviour; the nonlinearity (we consider the error function and ReLU); and the frequency of residual skip connections (for Residual CNN GPs). We do not retrain the model on the validation set after choosing hyperparameters. Table 1: MNIST classification results. #samples gives the number of kernels that were randomly sampled for the hyperparameter search. "ConvNet GP" and "Residual CNN GP" are random CNN architectures with a fixed filter size, whereas "ResNet GP" is a slight modification of the architecture by BID11. Entries labelled "SGD" used stochastic gradient descent for tuning hyperparameters, by maximising the likelihood of the training set. The last two methods use parametric neural networks. The hyperparameters of the ResNet GP were not optimised (they were fixed based on the original ResNet architecture). The "ResNet GP" (Table 1) is the kernel equivalent to a 32-layer version of the basic residual architecture by BID10. The differences are: an initial 3 × 3 convolutional layer, and a final dense layer instead of average pooling. We chose to remove the pooling because computing its output variance requires the off-diagonal elements of the filter covariance, in which case we could not exploit the efficiency gains described in Sec. 3.3. We found that, despite it not being optimised, the 32-layer ResNet GP outperformed all other comparable architectures (Table 1), including the NNGP in BID18, which is state-of-the-art for non-convolutional networks, and convolutional GPs (van der Wilk et al.; BID14). That said, our results have not reached state-of-the-art for methods that incorporate a parametric neural network, such as a standard ResNet and a Gaussian process with a deep neural network kernel BID3. To check whether the GP limit is applicable to the relatively small networks used in practice (with on the order of 100 channels in the first layers), we randomly sampled 10,000 32-layer ResNets, with 3, 10, 30 and 100 channels in the first layers, and, following the usual practice for ResNets, we increase the number of hidden units when we downsample the feature maps. The probability density plots show a good match around 100 channels (FIG6 A), which agrees with a more sensitive graphical procedure based on quantile-quantile plots (FIG6 B).
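Returning to the regression setup described above, the prediction rule (the row-wise maximum of K_x*x K_xx^{-1} Y) is a few lines of linear algebra. The sketch below adds a small jitter term for numerical stability, which is our addition rather than something stated in the text.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gp_classify(K_xx, K_sx, Y, jitter=1e-6):
    """Noiseless GP regression treated as classification.

    K_xx: (N, N) kernel matrix between training points.
    K_sx: (Nt, N) kernel between test and training points.
    Y:    (N, 10) one-hot regression targets in {-1, +1}.
    Returns the predicted class per test point.
    """
    c = cho_factor(K_xx + jitter * np.eye(len(K_xx)))  # stabilised Cholesky
    alpha = cho_solve(c, Y)                            # K_xx^{-1} Y
    return (K_sx @ alpha).argmax(axis=1)               # row-wise maximum
```

A Cholesky solve is used instead of an explicit inverse, which matches the observation later in the text that inverting K_xx is cheap relative to computing it.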
Notably, even for only 30 channels, the moments match closely (FIG6 C). For comparison, typical ResNets use from 64 (BID10) to 192 channels in their first layers. We believe that this is because the moment propagation equations only require the Gaussianity assumption for propagation through the ReLU, and presumably this is robust to non-Gaussian input activations. Computational efficiency. Asymptotically, computing the kernel matrix takes O(N²LD) time, where L is the number of layers in the network and D is the dimensionality of the input, and inverting the kernel matrix takes O(N³). As such, we expect that for very large datasets, inverting the kernel matrix will dominate the computation time. However, on MNIST, N³ is only around a factor of 10 larger than N²LD. In practice, we found that it was more expensive to compute the kernel matrix than to invert it. For the ResNet kernel, the most expensive one, computing K_xx and K_xx* for validation and test took 3h 40min on two Tesla P100 GPUs. In contrast, inverting K_xx and computing validation and test performance took 43.25 ± 8.8 seconds on a single Tesla P100 GPU. Van der Wilk et al. (BID22) also adapted GPs to image classification. They defined a prior on functions f that take an image and output a scalar. First, draw a function g ∼ GP(0, k_p(X, X′)). Then, f is the sum of the output of g applied to each of the convolutional patches. Their approach is also inspired by convolutional NNs, but their kernel k_p is applied to all pairs of patches of X and X′. This makes their convolutional kernel expensive to evaluate, requiring inter-domain inducing point approximations to remain tractable.

FIG6: Randomly sampled 32-layer ResNets with 3, 10, 30, and 100 channels in their first layers. A: Comparison of the empirical and limiting probability densities. B: A more sensitive test of Gaussianity is a quantile-quantile plot, which shows convergence with 100 channels. C: The moments (variances and covariances) for 100 training inputs show a good match for all numbers of channels.

The kernels in this work, directly motivated by the infinite-filter limit of a CNN, only apply something like k_p to the corresponding pairs of patches within X and X′ (Eq. 10). As such, the CNN kernels are cheaper to compute and exhibit superior performance (Table 1), despite the use of an approximate likelihood function. BID14 define a prior over functions by stacking several GPs with van der Wilk's convolutional kernel, forming a "Deep GP" BID6. In contrast, the kernel in this paper confines all hierarchy to the definition of the kernel, and the resulting GP is shallow. BID3 improved deep kernel learning. The inputs to a classic GP kernel k (e.g. RBF) are preprocessed by applying a feature extractor g (a deep NN) prior to computing the kernel: k_deep(X, X′) := k(g(X; θ), g(X′; θ)). The NN parameters are optimised by gradient ascent using the likelihood as the objective, as in standard GP kernel learning (BID24, Chapter 5). Since deep kernel learning incorporates a state-of-the-art NN with over 10^6 parameters, we expect it to perform similarly to a NN applied directly to the task of image classification. At present both CNNs and deep kernel learning display superior performance to the GP kernels in this work. However, the kernels defined here have far fewer parameters (around 10, compared to their 10^6). Other recent work also suggests that a CNN exhibits GP behaviour. However, they take the infinite limit with respect to the filter size, not the number of filters.
Thus, their infinite network is inapplicable to real data, which is always of finite dimension. Finally, there is a series of papers analysing the mean-field behaviour of deep NNs and CNNs which aims to find good random initializations, i.e. those that do not exhibit vanishing or exploding gradients or activations (BID25). Apart from their very different focus, the key difference to our work is that they compute the variance for a single training example, whereas to obtain the GP's kernel, we additionally need to compute the output covariances for different training/test examples. We have shown that deep Bayesian CNNs with infinitely many filters are equivalent to a GP with a recursive kernel. We also derived the kernel for the GP equivalent to a CNN, and showed that, in handwritten digit classification, it outperforms all previous GP approaches that do not incorporate a parametric NN into the kernel. Given that most state-of-the-art neural networks incorporate structure (convolutional or otherwise) into their architecture, the equivalence between CNNs and GPs is potentially of considerable practical relevance. In particular, we hope to apply GP CNNs in domains as widespread as adversarial examples, lifelong learning and k-shot learning, and we hope to improve them by developing efficient multi-layered inducing point approximation schemes. The key technical issues in the proof (and the key differences between BID18 and BID21) arise from exactly how and where we take limits. In particular, consider the activations as being functions of the activities at the previous layer, DISPLAYFORM0 Now, there are two approaches to taking limits. First, both our argument in the main text and the argument in BID18 are valid if we are able to take limits "inside" the network, DISPLAYFORM1 However, BID20 argue that it is preferable to take limits "outside" the network. In particular, BID21 take the limit with all layers simultaneously, DISPLAYFORM2 where C = C(n) goes to infinity as n → ∞. That said, similar technical issues arise if we take limits in sequence, but outside the network. In the main text, we follow BID18 in sequentially taking the limit of each layer to infinity (i.e. C^(1) → ∞, then C^(2) → ∞, etc.). This dramatically simplified the argument, because taking the number of units in the previous layer to infinity means that the inputs from that layer are exactly Gaussian distributed. However, BID21 argue that the more practically relevant limit is where we take all layers to infinity simultaneously. This raises considerable additional difficulties, because we must reason about convergence in the case where the previous layer is finite. Note that this section is not intended to stand independently: it is intended to be read alongside BID21, and we use several of their results without proof. Mirroring Definition 3 in BID21, we begin by choosing a set of "width" functions, C^(ℓ)(n), for ℓ ∈ {1, . . ., L}, which all approach infinity as n → ∞. In BID21, these functions described the number of hidden units in each layer, whereas here they describe the number of channels. Our goal is then to extend the proofs in BID21 (in particular, of Theorem 4), to show that the output of our convolutional networks converges in distribution to a Gaussian process as n → ∞, with mean zero and covariance given by the recursion above. The proof in BID21 has three main steps. First, they use the Cramér-Wold device to reduce the full problem to that of proving convergence of scalar random variables to a Gaussian with specified variance.
Second, if the previous layers have finite numbers of channels, then the channels a^(ℓ)_j(X) and a^(ℓ)_{j′}(X) are uncorrelated but no longer independent, so we cannot apply the CLT directly, as we did in the main text. Instead, they write the activations as a sum of exchangeable random variables, and derive an adapted CLT for exchangeable (rather than independent) random variables BID0. Third, they show that the moment conditions required by their exchangeable CLT are satisfied. To extend their proofs to the convolutional case, we begin by defining our networks in a form that is easier to manipulate and as close as possible to the corresponding equation in BID21, DISPLAYFORM0 DISPLAYFORM1 where, DISPLAYFORM2 The first step is to use the Cramér-Wold device (Lemma 6 in BID21), which indicates that convergence in distribution of a sequence of finite-dimensional vectors is equivalent to convergence on all possible linear projections to the corresponding real-valued random variable. Mirroring Eq. 25 in BID21, we consider convergence of random vectors, DISPLAYFORM3 where L ⊂ X × N × {1, . . ., H D} is a finite set of tuples of data points and channel indices, i, and indices of elements within channels/feature maps, µ. The suffix [n] indicates width functions that are instantiated with input, n. Now, we must prove that these projections converge in distribution to a Gaussian. We begin by defining summands, as in Eq. 26 in BID21, DISPLAYFORM4 such that the projections can be written as a sum of the summands, exactly as in Eq. 27 in BID21, DISPLAYFORM5 Now we can apply the exchangeable CLT to prove that T(L, α)[n] converges to the limiting Gaussian implied by the recursions in the main text. To apply the exchangeable CLT, the first step is to mirror Lemma 8 in BID21, in showing that for each fixed n and ℓ ∈ {2, . . ., L + 1}, the summands, γ_j(L, α)[n], are exchangeable with respect to the index j. In particular, we apply de Finetti's theorem, which states that a sequence of random variables is exchangeable if and only if they are i.i.d. conditional on some set of random variables, so it is sufficient to exhibit such a set of random variables. Mirroring Eq. 29 in BID21, we apply the recursion, conditioning on {g^(ℓ−2)_{k,ξ}(x)[n] : k ∈ {1, . . ., C^(ℓ−2)(n)}, ξ ∈ {1, . . ., H^(ℓ−2)D^(ℓ−2)}, x ∈ L_X}, where L_X is the set of input points in L. The exchangeable CLT in Lemma 10 in BID21 indicates that T(L, α)[n] converges in distribution to N(0, σ²_*) if the summands are exchangeable (which we showed above), and if three conditions hold, DISPLAYFORM6 Condition a) follows immediately as the summands are uncorrelated and zero-mean. Conditions b) and c) are more involved, as convergence in distribution in the previous layers does not imply convergence in moments for our activation functions. We begin by considering the extension of Lemma 20 in BID21, which allows us to show conditions b) and c) above, even in the case of unbounded but linearly enveloped nonlinearities (Definition 1 in BID21). Lemma 20 states that the eighth moments of f^(ℓ)_{i,µ}(x)[n] are bounded by a finite constant independent of n ∈ N. We prove this by induction. The base case is trivial, as f^(1)_{i,µ}(x)[n] is a linear function of Gaussian RVs. For the induction step, we write f^(ℓ)_{i,µ}(x)[n] in terms of the previous layer's post-nonlinearities, where {g^(ℓ−1)_{j,ν}(x)[n]} is the set of post-nonlinearities corresponding to j ∈ {1, . . ., C^(ℓ−1)(n)} and ν in the µth patch. Following BID21, observe that DISPLAYFORM7

Figure: The x-axis gives the GP prediction for the label probability. The points give the corresponding proportion of test points with that label, and the bars give the proportion of training examples in each bin.
We show that CNNs and ResNets with appropriate priors on the parameters are Gaussian processes in the limit of infinitely many convolutional filters, extending similar results for dense networks.
969
scitldr
We present an end-to-end design methodology for efficient deep learning deployment. Unlike previous methods that separately optimize the neural network architecture, pruning policy, and quantization policy, we jointly optimize them in an end-to-end manner. To deal with the larger design space this brings, we train a quantization-aware accuracy predictor that is fed to an evolutionary search to select the best fit. We first generate a large dataset of ⟨NN architecture, ImageNet accuracy⟩ pairs without training each architecture, but by sampling a unified supernet. Then we use these data to train an accuracy predictor without quantization, further using a predictor-transfer technique to get the quantization-aware predictor, which reduces the amount of post-quantization fine-tuning time. Extensive experiments on ImageNet show the benefits of the end-to-end methodology: it maintains the same accuracy (75.1%) as the ResNet34 float model while saving 2.2× BitOps compared with the 8-bit model; we obtain the same level of accuracy as MobileNetV2+HAQ while achieving 2×/1.3× latency/energy savings; the end-to-end optimization outperforms separate optimizations using ProxylessNAS+AMC+HAQ by 2.3% accuracy while reducing GPU hours and CO2 emission by orders of magnitude. Deep learning has prevailed in many real-world applications like autonomous driving, robotics, and mobile VR/AR, while efficiency is the key to bridging research and deployment. Given a constrained resource budget on the target hardware (e.g., latency, model size, and energy consumption), it requires an elaborate design of the network architecture to achieve optimal performance within the constraint. Traditionally, the deployment of efficient deep learning can be split into model architecture design and model compression (pruning and quantization). Some existing works have shown that such a sequential pipeline can significantly reduce the cost of existing models. Nevertheless, careful hyper-parameter tuning is required to obtain optimal performance. The number of hyper-parameters grows exponentially when we consider the three stages of the pipeline together, which soon exceeds acceptable human labor bandwidth. To tackle the problem, recent works have applied AutoML techniques to automate the process. Researchers proposed Neural Architecture Search (NAS) to automate the model design, outperforming human-designed models by a large margin. Based on similar techniques, researchers adopted reinforcement learning to compress the model via automated pruning and automated quantization. However, optimizing these three factors in separate stages leads to sub-optimal results: e.g., the best network architecture for the full-precision model is not necessarily the optimal one after pruning and quantization. Besides, this three-step strategy also requires considerable search time and energy consumption. Therefore, we need a joint, end-to-end solution to optimize the deep learning model for a certain hardware platform. However, directly extending existing AutoML techniques to our end-to-end model optimization setting can be problematic. Firstly, the joint search space is cubic compared to stage-wise search, making the search difficult. Introducing pruning and quantization into the pipeline also greatly increases the total search time, as both require time-consuming post-processing (e.g., fine-tuning) to get an accuracy approximation.
Moreover, it is hard to verify that the search spaces of the individual pipeline steps are disentangled, and each step has its own optimization objective (e.g. accuracy, latency, energy), so the final policy of the pipeline often turns out to be sub-optimal. To this end, we proposed EMS, an end-to-end design method to solve this problem. Our approach is derived from one-shot NAS. We reorganize the traditional pipeline of "model design→pruning→quantization" into "architecture search + mixed-precision search". The former consists of both coarse-grained architecture search (topology, operator choice, etc.) and fine-grained channel search (replacing traditional channel pruning). The latter aims to find the optimal mixed-precision quantization policy, trading off accuracy against resource consumption. We work on both aspects to address the search efficiency. For architecture search, we propose to train a highly flexible super network that supports not only operator changes but also fine-grained channel changes, so that we can perform a joint search over architecture and channel number. For the mixed-precision search, since quantized accuracy evaluation requires time-consuming fine-tuning, we instead use a predictor to predict the accuracy after quantization. Nevertheless, collecting data pairs for predictor training can be expensive (it also requires fine-tuning). We propose a predictor-transfer technique to dramatically improve the sample efficiency. Our quantization-aware accuracy predictor is transferred from a full-precision accuracy predictor, which is first trained on cheap data points collected using our flexible super network (evaluation only, no training required). Once the predictor P(arch, prune, quantization) is trained, we can perform the search at ultra-fast speed just using the predictor. With the above design, we are able to efficiently perform a joint search over model architecture, channel number, and mixed-precision quantization. The predictor can also be used for new hardware and deployment scenarios, without training the whole system again. Extensive experiments show the superiority of our method: while maintaining the same level of accuracy (75.1%) as the ResNet34 float model, we achieve a 2.2× reduction in BitOps compared to the 8-bit version; we obtain the same level of accuracy as MobileNetV2+HAQ while achieving 2×/1.3× latency/energy savings; our models outperform separate optimizations using ProxylessNAS+AMC+HAQ by 2.3% accuracy under the same latency constraints, while reducing GPU hours and CO2 emission by orders of magnitude. The contributions of this paper are: • We devise an end-to-end methodology, EMS, to jointly perform NAS-pruning-quantization, thus unifying the conventionally separated stages into an integrated solution. • We propose a predictor-transfer method to tackle the high cost of collecting the quantization-aware accuracy predictor's dataset of ⟨NN architecture, quantization policy, accuracy⟩ tuples. • Such an end-to-end method can efficiently search for efficient models. With the supernet and the quantization-aware accuracy predictor, it takes only minutes to search a compact model for a new platform, enabling automatic model adjustment in diverse deployment scenarios. Researchers have proposed various methods to accelerate model inference, including architecture design, network pruning, and network quantization (b). Neural Architecture Search. Tracing back to the development of NAS, one can see the reduction in search time.
Early works used an RL agent to determine the cell-wise architecture. To search for the architecture more efficiently, many later works viewed architecture search as a path-finding problem (a; b); this cuts down the search time through joint training rather than iteratively training from scratch. Inspired by the path structure, one-shot methods have been proposed to further leverage the network's weights at training time, and have begun to handle the mixed-precision case for efficient deployment. Another line of work grasps this information with a performance predictor, which reduces the frequent evaluation on the target dataset when searching for the optimum.

Table: Comparison with (b), SPOS: Single Path One-Shot, ChamNet, AMC, HAQ, and EMS (ours). EMS is distinguished from other works by directly searching a mixed-precision architecture without extra interaction with the target dataset.

Pruning. Extensive works show the progress achieved in pruning: early on, researchers proposed fine-grained pruning (; 2016b) by cutting off the connections (i.e., elements) within the weight matrix. However, this kind of method is not friendly to CPUs and GPUs, and requires dedicated hardware (that supports sparse matrix multiplication) to perform inference. Later, researchers proposed channel-level pruning by removing entire convolution channels based on some importance score (e.g., L1-norm) to enable acceleration on general-purpose hardware. However, both fine-grained and channel-level pruning introduce an enormous search space, as different layers have different sensitivities (e.g., the first convolution layer is very sensitive to pruning, as it extracts important low-level features, while the last layer can be pruned easily, as it is very redundant). To this end, recent research leverages AutoML techniques to automate this exploration process and surpass human design. Quantization. Quantization is a necessary technique to deploy models on hardware platforms like FPGAs and mobile phones. (a) quantized the network weights to reduce the model size by grouping the weights using k-means. Other works binarized the network weights into {−1, +1}; quantized the network using one bit for weights and two bits for activations; binarized each convolution filter into {−w, +w}; mapped the network weights into {−w_N, 0, +w_P} using two bits with a trainable range; or explicitly regularized the loss perturbation and weight approximation error in an incremental way to quantize the network using binary or ternary weights. 8-bit integers have also been used for both weights and activations for deployment on mobile devices. Some existing works explored the relationship between quantization and network architecture. HAQ proposed to leverage AutoML to determine the bit-width for a mixed-precision quantized model. A better trade-off can be achieved when different layers are quantized with different bits, showing the strong correlation between network architecture and quantization. Multi-Stage Optimization. The above methods are orthogonal to each other, and a straightforward combination approach is to apply them sequentially in multiple stages, i.e.
NAS+Pruning+Quantization:
• In the first stage, we can search the neural network architecture with the best accuracy on the target dataset.
• In the second stage, we can prune the channels in the model automatically.
• In the third stage, we can quantize the model to mixed precision to make full use of the emerging hardware architecture.
However, this separation usually leads to a sub-optimal solution: e.g., the best neural architecture for the floating-point model may not be optimal for the quantized model. Moreover, frequent evaluations on the target dataset make such methods time-costly: e.g., a typical pipeline as above can take about 300 GPU hours, making it hard for researchers with limited computation resources to do automatic design. Figure 1: An overview of our end-to-end design methodology. We first train an accuracy predictor for the full-precision NN, then incrementally train an accuracy predictor for the quantized NN (predictor-transfer). Finally, evolutionary search is performed to find the specialized NN architecture that fits the hardware constraints. Joint Optimization. Instead of optimizing NAS, pruning and quantization independently, joint optimization aims to find a balance among these configurations and search for the optimal strategy. The joint optimization objective can be formalized accordingly. However, the search space of this new objective is cubic compared to that of each separate stage, so it becomes challenging to perform joint optimization. We endeavor to unify NAS, pruning and quantization as a joint optimization. The outline is: 1. Train a super network that covers a large search space, from which every sub-network can be directly extracted without re-training. 2. Build a quantization-aware accuracy predictor to predict the quantized accuracy given a sub-network and a quantization policy. 3. Construct a latency/energy lookup table and perform resource-constrained evolutionary search. Thereby, this joint optimization problem can be tackled in an end-to-end manner. Comparison with Recent Methods. Our search space is quadratic compared to that of (b), since we need to take care of both the architecture configuration and the quantization policy, rather than the quantization policy only. Unlike prior predictor-based work, whose predictors are trained only on full-precision (FP) data, we face an unbalanced ratio of full-precision (FP) and mixed-precision (MP) data. Also, architecture configuration and quantization policy are orthogonal to each other; simply treating this problem as before leads to a significant performance drop when training the predictor. Different from prior work, which uses ResNet as the super network's backbone and cannot handle the more efficient scenario of a MobileNet backbone due to the large accuracy drop after quantization, our super network provides more stable accuracy statistics after sampling a sub-network for quantization. This gives us the opportunity to acquire quantization data simply by extracting a sub-network and quantizing it. The overall framework of our end-to-end design method is shown in Figure 1. It consists of a highly flexible super network with fine-grained channels, an accuracy predictor, and an evolutionary search jointly optimizing architecture, pruning, and quantization. Neural architecture search aims to find a good sub-network from a large search space. Traditionally, each sampled network is trained to obtain the actual accuracy, which is time consuming. Recent one-shot based NAS first trains a large, multi-branch network.
At each step, a sub-network is extracted from the large network to directly evaluate the approximated accuracy. Such a large network is called a super network. Since the choices for different layers in a deep neural network are largely independent, a popular way is to design multiple choices (e.g., kernel size, expansion ratios) for each layer. In this paper, we use a super network that supports different kernel sizes (i.e. 3, 5, 7) and channel numbers (i.e. 4×B to 6×B, with 8 as the interval, where B is the base channel number in that block) at the block level, and different depths (i.e. 2, 3, 4) at the stage level. The combined search space contains more than 10^35 sub-networks, which is large enough for neural architecture search. Properties of the Super Network. We also follow the one-shot setting: first build a super network, then perform search on top of it. To ensure efficient architecture search, we find that the super network needs to satisfy the following properties: For every extracted sub-network, the performance can be directly evaluated without re-training, so that the cost of training only needs to be paid once. It must also support an extremely large and fine-grained search space to enable channel number search. As we hope to incorporate the pruning policy into the architecture space, the super network not only needs to support different operators, but also fine-grained channel numbers (8 as interval). Thereby, the new space is significantly enlarged (nearly quadratically, from 10^19 to 10^35). However, it is hard to achieve the two goals at the same time due to the nature of super network training: it is generally believed that if the search space gets too large (e.g., supporting fine-grained channel numbers), the accuracy approximation becomes inaccurate (b). A large search space results in high variance when training the super network. To address this issue, we adopt the progressive shrinking (PS) algorithm (a) to train the super network. Specifically, we first train a full sub-network with the largest kernel sizes, channel numbers and depths in the super network, and use it as a teacher to progressively distill the smaller sub-networks sampled from the super network. During distillation, the trained sub-networks still update their weights to prevent accuracy loss. The PS algorithm effectively reduces the variance during super network training. By doing so, we can ensure that a sub-network extracted from the super network preserves competitive accuracy without re-training. To reduce the design cost for various deployment scenarios, we propose to build a quantization-aware accuracy predictor P, which predicts the accuracy of the mixed-precision (MP) model based on architecture configurations and quantization policies. During search, we use the predicted accuracy acc = P(arch, prune, quantize) instead of the measured accuracy. The input to the predictor P is the encoding of the network architecture, the pruning strategy, and the quantization policy. Architecture and quantization policy encoding. We encode the network architecture block by block: for each building block (i.e. a bottleneck residual block as in MobileNetV2), we encode its kernel size, channel number, and weight/activation bitwidths as one-hot vectors. We further concatenate the features of all blocks as the encoding of the whole network. For example, a 5-layer network can be represented by a 75-dim (5×(3+4+2×4)=75) vector. In our setting, the kernel size choices follow the super network design above, the choices of channel number depend on the base channel number for each block, and the weight and activation bitwidths are each chosen from four options; there are 21 blocks in total to design.
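A minimal sketch of this block-wise one-hot encoding follows. The concrete choice lists are placeholders consistent with the dimensions above (3 kernel options, 4 channel options, 4 bitwidth options each for weights and activations); the actual grids are our assumption where the text elides them.

```python
import numpy as np

KERNELS = [3, 5, 7]     # 3-dim one-hot (stated in the text)
CHANNEL_CHOICES = 4     # 4-dim one-hot; concrete values depend on the block's base channels
BIT_CHOICES = 4         # 4-dim one-hot each for weight and activation bitwidths (assumed)

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

def encode_block(kernel_idx, channel_idx, w_bit_idx, a_bit_idx):
    """Per-block encoding: 3 + 4 + 2*4 = 15 dims, matching 5 x 15 = 75
    for the 5-layer example in the text."""
    return np.concatenate([
        one_hot(kernel_idx, len(KERNELS)),
        one_hot(channel_idx, CHANNEL_CHOICES),
        one_hot(w_bit_idx, BIT_CHOICES),
        one_hot(a_bit_idx, BIT_CHOICES),
    ])

def encode_network(blocks):
    """blocks: list of (kernel_idx, channel_idx, w_bit_idx, a_bit_idx) tuples,
    one per block (21 blocks in the paper's setting)."""
    return np.concatenate([encode_block(*b) for b in blocks])
```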
Accuracy Predictor. The predictor we use is a 3-layer feed-forward neural network with a hidden dimension of 400. As shown on the left of Figure 2, the input of the predictor is the one-hot encoding described above, and the output is the predicted accuracy. Different from existing methods, our predictor-based method does not require frequent evaluation of architectures on the target dataset in the search phase. Once we have the predictor, we can integrate it with any search method (e.g. reinforcement learning, evolution, Bayesian optimization, etc.) to perform end-to-end design over architecture-pruning-quantization at negligible cost. However, the biggest challenge is how to collect an [architecture, quantization policy, accuracy] dataset to train the predictor for quantized models, because: 1) collecting a quantized model's accuracy is time-consuming: fine-tuning is required to recover the accuracy after quantization, which takes about 0.2 GPU hours per data point. In fact, we find that 80k data pairs is a suitable size for training a good full-precision accuracy predictor. If we collected a quantized dataset of the same size as the full-precision one, it would cost 16,000 GPU hours, which is far beyond affordable. 2) The quantization-aware accuracy predictor is harder to train than a traditional accuracy predictor on full-precision models: the architecture design and the quantization policy affect network performance from two separate aspects, making it hard to model the mutual influence. Thus, training the quantization-aware accuracy predictor in the traditional way results in a significant performance drop (Table 2). Figure 2: Predictor-transfer technique. We start from a pre-trained full-precision predictor and add another input head (green square at bottom right) denoting the quantization policy. Then we fine-tune the quantization-aware accuracy predictor. Transfer Predictor to Quantized Models. Collecting a quantized NN dataset for training the predictor is difficult (it needs fine-tuning), but collecting a full-precision NN dataset is easy: we can directly pick sub-networks from the supernet and measure their accuracy. We propose the predictor-transfer technique to increase the sample efficiency and make up for the lack of data. As the ordering of accuracies before and after quantization is usually preserved, we first pre-train the predictor on a large-scale dataset to predict the accuracy of full-precision models, then transfer it to quantized models. The quantized accuracy dataset is much smaller, and we only perform short-term fine-tuning. As shown in Figure 2, we add the quantization bits (weights & activations) of the current block into the input embedding to build the quantization-aware accuracy predictor. We then further fine-tune the quantization-aware accuracy predictor using the pre-trained FP predictor's weights as initialization. Since most of the weights are inherited from the full-precision predictor, the training requires much less data compared to training from scratch.
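The following PyTorch sketch shows one plausible realisation of the 3-layer predictor and of Figure 2's transfer step: all shape-compatible weights are copied from the full-precision predictor, and only the new input columns corresponding to the quantization bits start from scratch. The input dimensions (21 blocks × 7 or × 15 features) follow the encoding above but remain our assumption.

```python
import torch
import torch.nn as nn

class AccuracyPredictor(nn.Module):
    """3-layer feed-forward predictor with hidden width 400, as in the text."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 400), nn.ReLU(),
            nn.Linear(400, 400), nn.ReLU(),
            nn.Linear(400, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

# Predictor transfer (one plausible reading of Figure 2): the quantization bits
# enlarge the input, so we copy every shape-compatible tensor from the FP
# predictor and copy only the overlapping columns of the first layer.
fp = AccuracyPredictor(in_dim=21 * 7)    # architecture-only encoding (assumed)
mp = AccuracyPredictor(in_dim=21 * 15)   # architecture + quantization encoding (assumed)
with torch.no_grad():
    for p_fp, p_mp in zip(fp.net.parameters(), mp.net.parameters()):
        if p_fp.shape == p_mp.shape:
            p_mp.copy_(p_fp)                       # biases and deeper layers
        elif p_fp.dim() == 2:
            p_mp[:, :p_fp.shape[1]].copy_(p_fp)    # first-layer overlap
```

After this initialization, only short-term fine-tuning on the small quantized dataset is needed, which is the sample-efficiency gain the text describes.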
As different hardware can have drastically different properties (e.g., cache size, level of parallelism), the optimal network architecture and quantization policy for one hardware platform is not necessarily the best for another. Therefore, instead of relying on an indirect signal (e.g., BitOps), our optimization is directly based on the measured latency and energy on the target hardware. Measuring Latency and Energy. Evaluating each candidate policy on actual hardware can be very costly. Thanks to the sequential structure of neural networks, we can approximate the latency (or energy) of a model by summing up the latency (or energy) of each layer. We first build a lookup table containing the latency and energy of each layer under different architecture configurations and bit-widths. Afterwards, for any candidate policy, we can break it down and query the lookup table to directly calculate the latency (or energy) at negligible cost. In practice, we find that this precisely approximates the actual inference cost. Resource-Constrained Evolution Search. We adopt evolution-based architecture search to explore the best resource-constrained model. On top of this, we replace the evaluation step with our quantization-aware accuracy predictor, estimating the performance of each candidate directly. The cost for each candidate is then reduced from N model inferences to a single predictor inference (where N is the size of the validation set). Furthermore, we can verify the resource constraints with our latency/energy lookup table, avoiding direct interaction with the target hardware. Given a resource budget, we directly eliminate the candidates that exceed the constraints. Table 2: Comparison with state-of-the-art efficient models for hardware with fixed or mixed-precision quantization. Our method cuts down the marginal search time by two orders of magnitude while achieving better performance than others. The marginal CO2 emission (lbs) and cloud compute cost ($) are negligible when searching for a new scenario. Data Preparation for the Quantization-aware Accuracy Predictor. We generate two kinds of data (2,500 points each): 1. randomly sample both the architecture and the quantization policy; 2. randomly sample an architecture, and sample 10 quantization policies for that architecture configuration. To speed up the data collection process, we use the ImageNet-100 dataset. We mix the data for training the quantization-aware accuracy predictor, and transfer from the full-precision pretrained predictor's weights. The number of data points used to train the full-precision predictor is 80,000. In this way, our quantization-aware accuracy predictor can generalize across different architecture/quantization-policy pairs and learn the mutual relation between architecture and quantization policy. Evolutionary Architecture Search. For the evolutionary architecture search, we set the population size to 100, and choose the Top-25 candidates to produce the next generation (50 by mutation, 50 by crossover). The mutation rate is 0.1, following prior work. We set the maximum number of iterations to 500, and choose the best candidate among the final population. Quantization. We follow the implementation of prior mixed-precision work to do quantization. Specifically, we quantize the weights and activations with the given quantization policies. For each layer with weights w and quantization bitwidth b, we linearly quantize the weights to [−v, v]: the quantized weight is w′ = round(clamp(w, −v, v)/s)·s, with step size s = v/(2^{b−1} − 1). We choose a different v for each layer, selected to minimize the KL-divergence D(w ∥ w′) between the original weights w and the quantized weights w′. For activations, we quantize to [0, v], since the values are non-negative after the ReLU6 layer. To verify the effectiveness of our method, we conduct experiments covering two of the most important constraints for on-device deployment, latency and energy consumption, in comparison with state-of-the-art models found by neural architecture search. Besides, we compare BitOps with some multi-stage optimized models.
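To make the quantization step concrete, here is a hedged NumPy sketch of layer-wise linear quantization with a KL-based choice of the clipping value v. The step-size formula matches the description above, but the histogram-based KL proxy and the search grid are our assumptions, not an exact reproduction of the referenced implementation.

```python
import numpy as np

def linear_quantize(w, bits, v):
    """Linearly quantize weights to [-v, v] with the given bitwidth."""
    q_max = 2 ** (bits - 1) - 1
    s = v / q_max                                   # step size
    return np.clip(np.round(w / s), -q_max, q_max) * s

def best_clip_value(w, bits, n_grid=20, n_bins=128):
    """Pick v minimising a KL-style divergence between histograms of the
    original and quantized weights (a common proxy for D(w || w'))."""
    lo, hi = -np.abs(w).max(), np.abs(w).max()
    grid = np.linspace(0.5, 1.0, n_grid) * hi

    def hist(x):
        h = np.histogram(x, bins=n_bins, range=(lo, hi))[0].astype(float)
        h += 1e-10
        return h / h.sum()

    def kl(p, q):
        return float((p * np.log(p / q)).sum())

    p = hist(w)
    return min(grid, key=lambda v: kl(p, hist(linear_quantize(w, bits, v))))
```

Per layer, one would call best_clip_value once and reuse the resulting v at inference time; activations would use the same scheme on [0, v].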
Dataset, Models and Hardware Platform. The experiments are conducted on the ImageNet dataset. We compare the performance of our end-to-end designed models with mixed-precision models searched by (; b) and with some state-of-the-art fixed-precision 8-bit models. The platform we use to measure the resource consumption of mixed-precision models is BitFusion, a state-of-the-art spatial ASIC design for neural network acceleration. It employs a 2D systolic array of Fusion Units which spatially sum the shifted partial products of two-bit elements from weights and activations. Figure 3: Comparison with mixed-precision models searched by HAQ under latency/energy constraints. When the constraint is strict, our model outperforms the fixed-precision model by more than 10% accuracy, and HAQ by 5%. This performance boost may come from the dynamic architecture search space, rather than the fixed MobileNetV2 space. Figure 4: Comparison with sequentially designed mixed-precision models searched by AMC and HAQ (b; ;) under latency constraints. Our end-to-end designed models achieve better accuracy than the sequentially designed models. Table 2 presents the results for different efficiency constraints. As one can see, our models consistently outperform state-of-the-art models with either fixed or mixed precision. Specifically, our small model (Ours-B) obtains a 2.2% accuracy boost over the mixed-precision MobileNetV2 searched by HAQ (from 71.9% to 74.1%); our large model (Ours-C) attains better accuracy (from 74.6% to 75.1%) while requiring only half the BitOps. When the transfer technique is applied, model performance consistently improves (from 72.1% to 74.1%). It is also notable that the marginal cost in cloud compute and CO2 emission is two orders of magnitude smaller than for other works. Comparison with MobileNetV2+HAQ. Figure 3 shows the results on the BitFusion platform under different latency constraints and energy constraints. Our end-to-end designed models consistently outperform both mixed-precision and fixed-precision state-of-the-art models under each constraint. Notably, when the constraint is tight, our models show significant improvements over state-of-the-art mixed-precision models. Specifically, with similar efficiency constraints, we improve the ImageNet top-1 accuracy from the MobileNetV2 baseline of 61.4% to 71.9% (+10.5%) and 72.7% (+11.3%) for latency and energy constraints, respectively. Moreover, we show some models searched by our quantization-aware predictor without the predictor-transfer technique. Figure 6: When data is limited (right graph), the predictor-transfer technique largely improves the pairwise accuracy (from 64.6% to 75.6%). Using the predictor-transfer technique, we can achieve 85% pairwise accuracy with less than 3k data points, while at least 4k data points would be required without it. With this technique applied, the accuracy consistently improves, since the non-transferred predictor may lose some of the mutual information between architecture and quantization policy. Comparison with Multi-Stage Optimized Models. Figure 4 compares the multi-stage optimization with our joint optimization results. As one can see, under the same latency/energy constraint, our model attains better accuracy than the multi-stage optimized model (74.1% vs 71.8%). This is reasonable, since per-stage optimization might not find the globally optimal model the way end-to-end design does. Comparison under Limited BitOps.
Figure 5 reports the results under a limited BitOps budget. As one can see, under a tight BitOps constraint, our model improves accuracy by over 2% (from 71.5% to 73.9%) compared with the model searched by prior methods. Moreover, our models achieve the same level of accuracy (75.1%) as the full-precision ResNet34 while consuming only half the BitOps of its 4-bit version (25.69G vs. 52.83G). Figure 6 shows the performance of our predictor-transfer technique compared with training from scratch. For each setting, we train the predictor to convergence and evaluate the pairwise accuracy (i.e., the proportion of cases in which the predictor correctly identifies which of two randomly selected candidates from a held-out dataset is better), which is a measure of the predictor's performance. As shown, the transferred predictor reaches a higher pairwise accuracy and converges faster. Also, when data is very limited, our method gains more than 10% pairwise accuracy over training from scratch. We propose EMS, an end-to-end design method for architecting mixed-precision models. Unlike former works that decouple the process into separate stages, we directly search for the optimal mixed-precision architecture without multi-stage optimization. We use a predictor-based method that requires no extra evaluation on the target dataset, which greatly saves GPU hours for searching under a new scenario, thus reducing the marginal CO2 emission and cloud compute cost. To tackle the high expense of data collection, we propose the predictor-transfer technique to make up for the limitation of data. Comparisons with state-of-the-art models show both the necessity of joint optimization and the superiority of our end-to-end design method.
We present an end-to-end design methodology for efficient deep learning deployment.
970
scitldr
Distributed optimization is vital in solving large-scale machine learning problems. A widely-shared feature of distributed optimization techniques is the requirement that all nodes complete their assigned tasks in each computational epoch before the system can proceed to the next epoch. In such settings, slow nodes, called stragglers, can greatly slow progress. To mitigate the impact of stragglers, we propose an online distributed optimization method called Anytime Minibatch. In this approach, all nodes are given a fixed time to compute the gradients of as many data samples as possible. The result is a variable per-node minibatch size. Workers then get a fixed communication time to average their minibatch gradients via several rounds of consensus, which are then used to update primal variables via dual averaging. Anytime Minibatch prevents stragglers from holding up the system without wasting the work that stragglers can complete. We present a convergence analysis and analyze the wall time performance. Our numerical results show that our approach is up to 1.5 times faster on Amazon EC2 and up to five times faster when there is greater variability in compute node performance. The advent of massive data sets has resulted in demand for solutions to optimization problems that are too large for a single processor to solve in a reasonable time. This has led to a renaissance in the study of parallel and distributed computing paradigms. Numerous recent advances in this field can be categorized into two approaches, synchronous and asynchronous. This paper focuses on the synchronous approach. One can characterize synchronization methods in terms of the topology of the computing system, either master-worker or fully distributed. In a master-worker topology, workers update their estimates of the optimization variables locally, followed by a fusion step at the master yielding a synchronized estimate. In a fully distributed setting, nodes are sparsely connected and there is no obvious master node. Nodes synchronize their estimates via local communications. In both topologies, synchronization is a key step. Maintaining synchronization in practical computing systems can, however, introduce significant delay. One cause is slow processing nodes, known as stragglers. A classical requirement in parallel computing is that all nodes process an equal amount of data per computational epoch prior to the initiation of the synchronization mechanism. In networks in which the processing speed and computational load vary greatly between nodes and over time, the straggling nodes will determine the processing time, often at great expense to overall system efficiency. Such straggler nodes are a significant issue in cloud-based computing systems. Thus, an important challenge is the design of parallel optimization techniques that are robust to stragglers. To meet this challenge, we propose an approach that we term Anytime MiniBatch (AMB). We consider a fully distributed topology and consider the problem of stochastic convex optimization via dual averaging. Rather than fixing the minibatch size, we fix the computation time (T) in each epoch, forcing each node to "turn in" its work after the specified fixed time has expired. This prevents a single straggler (or stragglers) from holding up the entire network, while allowing nodes to benefit from the partial work carried out by the slower nodes. On the other hand, fixing the computation time means that each node processes a different amount of data in each epoch.
Our method adapts to this variability. After computation, all workers get a fixed communication time (T_c) to share their gradient information via averaging consensus on their dual variables, accounting for the variable number of data samples processed at each node. Thus, the epoch time of AMB is fixed to T + T_c in the presence of stragglers and network delays. We analyze the convergence of AMB, showing that the online regret achieves O(sqrt(m)) performance, which is optimal for gradient-based algorithms for arbitrary convex losses. Here, m is the expected total number of samples processed across all nodes. We further show an upper bound stating that, in terms of the expected wall time needed to attain a specified regret, AMB is O(sqrt(n - 1)) faster than methods that use a fixed minibatch size, under the assumption that the computation time follows an arbitrary distribution, where n is the number of nodes. We provide numerical simulations using Amazon Elastic Compute Cloud (EC2) and show that AMB offers significant acceleration over the fixed minibatch approach. This work contributes to the ever-growing body of literature on distributed learning and optimization, which goes back at least as far as early work on distributed first-order methods. Recent seminal works consider distributed optimization in sensor and robotic networks, and stochastic learning and prediction in large, distributed data networks. A large body of work elaborates on these ideas, considering differences in topology, communications models, data models, etc. The two recent works most similar to ours consider distributed online stochastic convex optimization over networks with communications constraints. However, both of these works suppose that worker nodes are homogeneous in terms of processing power, and do not account for the straggler effect examined herein. Recent works propose synchronous fixed-minibatch methods to mitigate stragglers in the master-worker setup. These methods either ignore stragglers or use redundancy to accelerate convergence in the presence of stragglers. In comparison, our approach utilizes work completed by both fast and slow nodes, thus resulting in faster convergence in wall time. In this section we outline our computation and optimization model and step through the three phases of the AMB algorithm. The pseudocode of the algorithm is provided in App. A. We defer discussion of detailed mathematical assumptions and analytical results to Sec. 4. We suppose a computing system that consists of n compute nodes. Each node corresponds to a vertex in a connected and undirected graph G(V, E) that represents the inter-node communication structure. The vertex set V satisfies |V| = n and the edge set E tells us which nodes can communicate directly. Let N_i = {j in V : (i, j) in E, i != j} denote the neighborhood of node i. The collaborative objective of the nodes is to find the parameter vector w in W, a subset of R^d, that solves

min_{w in W} F(w) := E_x[f(w, x)].

The expectation E_x[.] is computed with respect to an unknown probability distribution Q over a set X, a subset of R^d. Because the distribution is unknown, the nodes must approximate the solution using data points drawn in an independent and identically distributed (i.i.d.) manner from Q. AMB uses dual averaging as its optimization workhorse and averaging consensus to facilitate collaboration among nodes.
It proceeds in epochs consisting of three phases: compute, in which nodes compute local minibatches; consensus, in which nodes average their dual variables together; and update, in which nodes take a dual averaging step with respect to the consensus-averaged dual variables. We let t index each epoch, and each node i has a primal variable w_i(t) in R^d and a dual variable z_i(t) in R^d. At the start of the first epoch, t = 1, we initialize all primal variables to the same value w(1) and all dual variables to zero, i.e., z_i(1) = 0 in R^d. Here, h: W -> R is a 1-strongly convex function. Compute Phase: All workers are given a fixed time T to compute their local minibatches. During each epoch, each node i is able to compute b_i(t) gradients of f(w, x), evaluated at w_i(t), where the data samples x_i(t, s) are drawn i.i.d. from Q. At the end of epoch t, each node i computes its local minibatch gradient

g_i(t) = (1/b_i(t)) sum_{s=1}^{b_i(t)} grad f(w_i(t), x_i(t, s)).

As we fix the compute time, the local minibatch size b_i(t) is a random variable. Let b(t) := sum_{i=1}^{n} b_i(t) be the global minibatch size aggregated over all nodes. This contrasts with traditional approaches in which the minibatch is fixed. In Sec. 4 we provide a convergence analysis that accounts for the variability in the amount of work completed by each node. In Sec. 5, we present a wall time analysis based on random local minibatch sizes. Consensus Phase: Between computational epochs each node is given a fixed amount of time, T_c, to communicate with neighboring nodes. The objective of this phase is for each node to obtain (an approximation of) the quantity z(t+1) = zbar(t) + g(t). The first term, zbar(t), is the weighted average of the previous dual variables. The second, g(t), is the average of all gradients computed in epoch t. The nodes compute this quantity approximately via several synchronous rounds of average consensus. Each node waits until it hears from all neighbors before starting a consensus round. As we have fixed the communication time T_c, the number of consensus rounds r_i(t) varies across workers and epochs due to random network delays. Let P be a positive semi-definite, doubly stochastic matrix (i.e., all entries of P are non-negative and all row- and column-sums are one) that is consistent with the graph G (i.e., P_ij = 0 whenever (i, j) is not in E and i != j). At the start of the consensus phase, each node i shares its message m_i(t) with its neighbors and iteratively averages the messages it receives. As long as G is connected and the second-largest eigenvalue of P is strictly less than unity, the iterations are guaranteed to converge to the true average. For finite r_i(t), each node will have an error in its approximation: instead of the exact average, at the end of the rounds of consensus node i holds an estimate with additive error xi_i(t). We use D^(r_i(t))({y_j}_{j in V}, i) to denote the distributed averaging effected by r_i(t) rounds of consensus. We note that the updated dual variable z_i(t+1) is a normalized version of the distributed average solution. Update Phase: After the distributed averaging of dual variables, each node updates its primal variable as

w_i(t+1) = argmin_{w in W} { <w, z_i(t+1)> + beta(t+1) h(w) },

where <.,.> denotes the standard inner product. As will be discussed further in our analysis, in this paper we assume h: W -> R to be a 1-strongly convex function and beta(t) to be a sequence of positive non-decreasing parameters, i.e., beta(t) <= beta(t+1). We also work in Euclidean space, where h(w) = ||w||^2 is a typical choice.
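The following is a minimal simulated sketch of these three phases. It is not the paper's implementation: we assume h(w) = 0.5 * ||w||^2 over W = R^d (so the update has the closed form w = -z/beta), a Poisson draw stands in for the timed compute phase, and the consensus message layout is our assumption, chosen so that its limit recovers z(t+1) = zbar(t) + g(t).

```python
import numpy as np

def amb(grad_f, sample, P, d, tau, r, beta):
    """Simulated Anytime Minibatch: n nodes, tau epochs, r consensus rounds.
    grad_f(w, x) is a per-sample gradient oracle; sample() draws x ~ Q."""
    n = P.shape[0]
    w = np.zeros((n, d))                       # primal variables w_i(t)
    z = np.zeros((n, d))                       # dual variables z_i(t)
    for t in range(1, tau + 1):
        # Compute phase: a fixed time budget yields a random per-node
        # minibatch size b_i(t); a Poisson draw stands in for the timer.
        b = 1 + np.random.poisson(50, size=n)
        g = np.stack([np.mean([grad_f(w[i], sample()) for _ in range(b[i])],
                              axis=0) for i in range(n)])
        # Consensus phase: r synchronous rounds of averaging with the doubly
        # stochastic matrix P, on messages [z_i, b_i * g_i, b_i].
        msg = np.hstack([z, b[:, None] * g, b[:, None].astype(float)])
        for _ in range(r):
            msg = P @ msg
        zbar, bg, bavg = msg[:, :d], msg[:, d:2 * d], msg[:, 2 * d:]
        z = zbar + bg / bavg                   # approx. z(t+1) = zbar(t) + g(t)
        # Update phase: with h(w) = 0.5*||w||^2 the minimizer of
        # <w, z> + beta(t+1)*h(w) over R^d is w = -z / beta(t+1).
        w = -z / beta(t + 1)
    return w.mean(axis=0)
```

For a constrained W, the closed-form update would be replaced by the projection/argmin step above. In this section we analyze the performance of AMB in terms of expected regret.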
As the performance is sensitive to the specific distribution of the processing times of the computing platform used, we first present a generic analysis in terms of the number of epochs processed and the size of the minibatches processed by each node in each epoch. Then in Sec. 5, in order to illustrate the advantages of AMB, we assert a probabilistic model on the processing time and analyze the performance in terms of the elapsed "wall time". We assume that the feasible space W ∈ R d of the primal optimization variable w is a closed and bounded convex set where D = max w,u∈W w − u. Let · denote the 2 norm. We assume the objective function f (w, x) is convex and differentiable in w ∈ W for all x ∈ X. We further assume that f (w, x) is Lipschitz continuous with constant L, i.e. DISPLAYFORM0 Let ∇f (w, x) be the gradient of f (w, x) with respect to w. We assume the gradient of f (w, x) is Lipschitz continuous with constant K, i.e., ∇f (w, x) − ∇f (w, x) ≤ K w −w, ∀ x ∈ X, and ∀ w,w ∈ W.As mentioned in Sec. 3, DISPLAYFORM1 where the expectation is taken with respect to the (unknown) data distribution Q, and thus ∇F (w) = E[∇f (w, x)]. We also assume that there exists a constant σ that bounds the second moment of the norm of the gradient so that DISPLAYFORM2 Let the global minimum be denoted w *:= arg min w∈W F (w). First we bound the consensus errors. Let z(t) be the exact dual variable without any consensus errors at each node DISPLAYFORM0 The following Lemma bounds the consensus errors, which is obtained using (, Theorem 2) DISPLAYFORM1 i (t) be the output after r rounds consensus. Let λ 2 (P) be the second eigenvalue of the matrix P and let ≥ 0, then DISPLAYFORM2 if the number of consensus rounds satisfies DISPLAYFORM3 We characterize the regret after τ epochs, averaging over the data distribution but keeping a fixed "sample path" of per-node minibatch sizes b i (t). We observe that due to the time spent in communicating with other nodes via consensus, each node has computation cycles that could have been used to compute more gradients had the consensus phase been shorter (or nonexistent). To model this, let a i (t) denote the number of additional gradients that node i could have computed had there been no consensus phase. This undone work does not impact the system performance, but does enter into our characterization of the regret. Let c i (t) = b i (t) + a i (t) be the total number of gradients that node i had the potential to compute during the t-th epoch. Therefore, the total potential data samples processed in the t-th epoch is c(t) = n i=1 c i (t). After τ epochs the total number of data points that could have been processed by all nodes in the absence of communication delays is DISPLAYFORM4 An important quantity is the ratio of total potential computations in each epoch to that actually completed. Define the maximum such minibatch "skewness" as DISPLAYFORM5 It turns out that it is important to compute this skewness across epochs (i.e., c(t + 1) versus b(t)) in order to bound the regret via a telescoping sum. [Details can be found in the supplementary material.]In practice, a i (t) and b i (t) (and therefore c i (t)) depend on latent effects, e.g., how many other virtual machines are co-hosted on node i, and therefore we model them as random variables. We bound the expected regret for a fixed sample path of a i (t) and b i (t). 
The sample paths of importance are c tot (τ) = {c i (t)} i∈V,t∈ [τ] and b tot (τ) = {b i (t)} i∈V,t∈ [τ], where we introduce c tot and b tot for notational compactness. Define the average regret after τ epochs as DISPLAYFORM6 where the expectation is taken with respect the the i.i.d. sampling from the distribution Q. Then, we have the following bound on R(τ).Theorem 2 Suppose workers collectively processed m samples after τ epochs, cf., minibatch skewness parameter γ, cf. FORMULA0, and let c max = max t∈[τ] c(t), c avg = (1/τ) τ t=1 c(t) and δ = max {t,t}∈{1,τ −1} |c(t)−c(t)| be the maximum, average, and variation across c(t). Further, suppose the averaging consensus has additive accuracy, cf. Lemma 1. Then, the expected regret is DISPLAYFORM7 Theorem 2 is proved in App. B of the supplementary material. We now make a few comments about this . First, recall that the expectation is taken with respect to the data distribution, but holds for any sample path of minibatch sizes. Further, the regret bound depends only on the summary statistics c max, c avg, δ, and γ. These parameters capture the distribution of the processing speed at each node. Further, the impact of consensus error, which depends on the communication speed relative to the processing speed of each node, is summarized in the assumption of uniform accuracy on the distributed averaging mechanism. Thus, Theorem 2 is a sample path that depends only coarsely on the distribution of the speed of data processing. Next, observe that the dominant term is the final one, which scales in the aggregate number of samples m. The first term is approximately constant, only scaling with the monotonically increasing β and c max parameters. The terms containing characterizes the effect of imperfect consensus, which can be reduced by increasing the number of rounds of consensus. The effect of variability across c(t) is reflected in the terms containing the c max, c avg and δ parameters. If perfect consensus were achieved (= 0) then all components of the final term that scales in √ m would disappear except for the term that contains the minibatch skewness parameter γ. It is through this term that the amount of useful computation performed in each epoch (b i (t) ≤ c i (t)) enters the . In the special case of constant minibatch size c max = c avg and δ = 0, we have the following corollary. Corollary 3 If c(t) = c for all t ∈ [τ] and the consensus error ≤ 1/c, then the expected regret is DISPLAYFORM8 We can translate Theorem 2 and Cor. 3 to a regret bound averaged over the sample path. Since the summary statistics c max, c avg, δ, and γ are sufficient to bound the regret, we assert a joint distribution p over these terms rather than over the sample path b tot (τ), c tot (τ). For the following , we need only specify several moments of the distribution. In Sec. 5 we will take the further step of choosing a specific distribution p. DISPLAYFORM0 ] so thatm = τc is the expected total work that can be completed in τ epochs. Also, let DISPLAYFORM1 If averaging consensus has additive accuracy, then the expected regret is bounded by DISPLAYFORM2 Theorem 4 is proved in App. F of the supplementary material. Note that this expected regret is over both the i.i.d. choice of data samples and the i.i.d. choice of (b(t), c(t)) pairs. Corollary 5 If ≤ 1/c, the expected regret is DISPLAYFORM3 Remark 1 Note that by letting = 0, we can immediately find the for master-worker setup. In the preceding section we studied regret as a function of the number of epochs. 
The advantage of AMB is the reduction in wall time: AMB can reach the same convergence in less time than fixed-minibatch approaches. Thus, in this section, we characterize the wall time performance of AMB. In AMB, each epoch corresponds to a fixed compute time T. As we have already noted, this contrasts with fixed-minibatch approaches, which have variable computing times. We refer to such fixed-minibatch methods as FMB. To gain insight into the advantages of AMB, we develop an understanding of the regret per unit time. We consider an FMB method in which each node computes b/n gradients, where b is the size of the global minibatch in each epoch. Let T_i(t) denote the amount of time taken by node i to compute b/n gradients for the FMB method. We make the following assumptions. Assumption 1: The time T_i(t) follows an arbitrary distribution with mean mu and variance sigma^2. Further, T_i(t) is identically distributed across node index i and epoch index t. Assumption 2: If node i takes T_i(t) seconds to compute b/n gradients in the t-th epoch, then it will take n*T_i(t)/b seconds to compute one gradient. Lemma 6: Let Assumptions 1 and 2 hold. Let the FMB scheme have minibatch size b, and let bbar be the expected minibatch size of AMB. Then, if we fix the computation time of an AMB epoch to T = (1 + n/b)*mu, we have bbar >= b. Lemma 6 is proved in App. G; it shows that the expected minibatch size of AMB is at least as big as that of FMB if we fix T = (1 + n/b)*mu. Thus, we get the same (or a better) expected regret bound. Next, we show that AMB achieves this in less time. Theorem 7: Let Assumptions 1 and 2 hold, let T = (1 + n/b)*mu, and let the minibatch size of FMB be b. Let S_A and S_F be the total compute times across tau epochs of AMB and FMB, respectively. Then

S_F / S_A <= (1 + (sigma/mu) * sqrt(n - 1)) / (1 + n/b).

The proof is given in App. G. Lemma 6 and Theorem 7 show that our method attains the same (or a better) bound on the expected regret as that given in Theorem 4, but is at most 1 + (sigma/mu)*sqrt(n - 1) times faster than traditional FMB methods. It was shown by Bertsimas et al. that this bound is tight and that there is a distribution that achieves it. In our setup, no analytical distribution exactly matches the finishing-time distribution. Recent papers on stragglers use the shifted exponential distribution to model T_i(t). The choice of the shifted exponential distribution is motivated by the fact that it strikes a good balance between analytical tractability and practical behavior. Under the assumption of a shifted exponential distribution, we show that AMB is O(log(n)) faster than FMB; this is proved in App. H. To evaluate the performance of AMB and compare it with that of FMB, we ran several experiments on Amazon EC2 for both schemes to solve two different classes of machine learning tasks: linear regression and logistic regression, using both synthetic and real datasets. In this section we present error vs. wall time performance using two experiments. Additional simulations are given in App. I. We solved two problems using two datasets: synthetic and real. The linear regression problem was solved using synthetic data. The global minimum parameter w* in R^d is generated from the multivariate normal distribution N(0, I). The workers observe a sequence of pairs (x_i(s), y_i(s)), where s is the time index; the aim of all nodes is to collaboratively learn the true parameter w*. The data dimension is d = 10^5.
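As a concrete stand-in for this synthetic setup, a per-sample oracle compatible with the AMB sketch above might look as follows; the observation model (linear measurements with additive Gaussian noise) and the noise scale are our assumptions, since the text does not pin them down.

```python
import numpy as np

d = 10                      # illustration only; the experiment uses d = 1e5
w_star = np.random.randn(d)

def sample():
    """One streaming observation (x, y); assumes y = x^T w* + noise."""
    x = np.random.randn(d)
    return x, x @ w_star + 0.1 * np.random.randn()

def grad_f(w, pair):
    """Gradient of the squared loss f(w, (x, y)) = 0.5 * (x^T w - y)^2."""
    x, y = pair
    return (x @ w - y) * x
```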
For the logistic regression problem, we used the MNIST images of digits 0 to 9. Each image is of size 28 x 28 pixels and can be represented as a 784-dimensional vector. We used the MNIST training dataset, which consists of 60,000 data points. The cost function is the cross-entropy

J(w) = -E[ sum_{i=0}^{9} 1{y = i} log P(y = i | x) ],

where x is the observed data point sampled randomly from the dataset, y is the true label of x, 1{.} is the indicator function, and P(y = i | x) is the predicted probability that y = i given the observed data point x, which can be calculated using the softmax function. In other words, P(y = i | x) = exp(w_i^T x) / sum_j exp(w_j^T x). The aim of the system is to collaboratively learn the parameter w in R^{c x d}, where c = 10 is the number of classes and d = 785 is the dimension (including the bias term), that minimizes the cost function while streaming the inputs x online. We tested the performance of the AMB and FMB schemes in a fully distributed setup. We used a network consisting of n = 10 nodes, in which the underlying network topology is given in FIG4 of App. I.1. In all our experiments, we used t2.micro instances and ami-6b211202, a publicly available Amazon machine image. To ensure a fair comparison between the two schemes, we ran both algorithms repeatedly and for a long time and averaged the performance over the same duration. We also observed that the processors finish tasks much faster during the first hour or two before slowing significantly. After that initial period, workers enter a steady state in which they keep their processor speed relatively constant except for occasional bursts. We discarded the transient behaviour and considered the performance during the steady state. We ran both AMB and FMB in a fully distributed setting to solve the linear regression problem. In FMB, each worker computed b = 6000 gradients. The average compute time during the steady-state phase was found to be 14.5 sec. Therefore, in the AMB case, the compute time for each worker was set to T = 14.5 sec., and we set T_c = 4.5 sec. Workers are allowed on average r = 5 rounds of consensus to average their calculated gradients. Figure 1(a) plots the error vs. wall time, which includes both computation and communication times. One can notice that AMB clearly outperforms FMB. In fact, the total amount of time spent by FMB to finish all the epochs is larger than that spent by AMB by almost 25%, as shown in FIG1(a) (e.g., the error rate achieved by FMB after 400 sec. has already been achieved by AMB after around 300 sec.). Both schemes have the same average inter-node communication times; therefore, when ignoring inter-node communication times, this ratio increases to almost 30%. Here we perform logistic regression using n = 10 distributed nodes. The network topology is the same as above. The per-node fixed minibatch in FMB is b/n = 800, while the fixed compute time in AMB is T = 12 sec. and the communication time is T_c = 3 sec. As in the linear regression experiment above, the workers on average go through r = 5 rounds of consensus. Figure 1(b) shows the achieved cost vs. wall clock time. We observe that AMB outperforms FMB by achieving the same error rate earlier. In fact, FIG1(b) demonstrates that AMB is about 1.7 times faster than FMB. For instance, the cost achieved by AMB at 150 sec. is almost the same as that achieved by FMB at around 250 sec. We proposed a distributed optimization method called Anytime MiniBatch. A key property of our scheme is that we fix the computation time of each distributed node instead of the minibatch size. Therefore, the finishing time of all nodes is deterministic and does not depend on the slowest processing node.
We proved the convergence rate of our scheme in terms of the expected regret bound. We performed numerical experiments using Amazon EC2 and showed our scheme offers significant improvements over fixed minibatch schemes. A AMB ALGORITHM The pseudocode of the Anytime Minibatch scheme operating in a distributed setting is given in Algorithm 1. Line 2 is for initialization purpose. Lines 3 − 8 corresponds to the compute phase during which each node i calculates b i (t) gradients. The consensus phase steps are given in lines 9 − 21. Each node first averages the gradients (line 9) and calculates the initial messages m i (t) it will share with its neighbours (line 10). Lines 14 − 19 corresponds to the communication rounds that in distributed averaging of the dual variable z i (t + 1) (line 21). Finally, line 22 represents the update phase in which each node updates its primal variable w i (t + 1).For the hub-and-spoke configuration, one can easily modify the algorithm as only a single consensus round is required during which all workers send their gradients to the master node which calculates z(t + 1) and w(t + 1) followed by a communication from the master to the workers with the updated w(t + 1). DISPLAYFORM0 initialize g i (t) = 0, b i (t) = 0 3: DISPLAYFORM1 while current_time DISPLAYFORM2 receive input start consensus rounds 10: DISPLAYFORM3 DISPLAYFORM4 11: DISPLAYFORM5 DISPLAYFORM6 w i (t + 1) = arg min w∈W w, z i (t + 1) + β(t + 1)h(w)23: end for B PROOF OF THEOREM 2In this section, we prove Theorem 2. There are three factors impacting the convergence of our scheme; first is that gradient is calculated with respect to f (w, x) rather than directly computing the exact gradient ∇ w F (w), the second factor is the errors due to limited consensus rounds, and the last factor is that we have variable sized minibatch size over epochs. We bound these errors to find the expected regret bound with respect to a sample path. Let w(t) be the primal variable computed using the exact dual z(t), cf. 12: DISPLAYFORM0 From (, Lemma 2), we have DISPLAYFORM1 Recall that z i (t) is the dual variable after r rounds of consensus. The last step is due to Lemma 1. Let X(t) be the total set of samples processed by the end of t-th epoch: DISPLAYFORM2 Let E[·] denote the expectation over the data set X(τ) where we recall τ is the number of epochs. Note that conditioned on X(t − 1) the w i (t) and x i (t, s) are independent according to equation 7. Thus, DISPLAYFORM3 where equation 25 is due to equation 10. From equation 17 we have DISPLAYFORM4 Now, we add and subtract F (w(t)) from equation 17 to get DISPLAYFORM5 Note that equation 28 and equation 29 are due to equation 8 and equation 24. Now, we bound the first term in the following Lemma, which is proved in App. C. DISPLAYFORM6 where DISPLAYFORM7 In equation 31, the first term is a constant, which depends on the initialization. The fourth and the sixth terms are due to consensus errors and the fifth term is due to noisy gradient calculation. The second and the last term E[ψ] are due to variable minibatch sizes. Now, the total regret can be obtained by using Lemma 8 in equation 30 DISPLAYFORM8 Define γ = max t∈{1,τ −1} DISPLAYFORM9 In App. D, we bound DISPLAYFORM10 1 α(t) and DISPLAYFORM11 1 α(t)β(t) 2 terms. Using them, we have DISPLAYFORM12 Now we bound E[ψ]. 
Using δ = max {t,t}∈{1,τ −1} |c(t) − c(t)| in equation 32, we can write DISPLAYFORM13 By substituting equation 37 in equation 36 DISPLAYFORM14 By rearranging terms DISPLAYFORM15, then from equation 15 µτ = m and we substitute DISPLAYFORM16 This completes the proof of Theorem 2.C PROOF OF LEMMA 8Note that g(t) is calculated with respect to w i (t) by different nodes in equation 3. Letḡ(t) be the minibatch calculated with respect to w(t) (given in equation 23) by all the nodes. DISPLAYFORM17 Note that there are two types of errors in computing gradients. The first is common in any gradient based methods. That is, the gradient is calculated with respect to the function f (w, x), which is based on the data x instead of being a direct evaluation of ∇ w F (w). We denote this error as q(t): DISPLAYFORM18 The second error from the fact that we use g(t) instead ofḡ(t). We denote this error as r(t): DISPLAYFORM19 Lemma 9 The following four relations hold DISPLAYFORM20 The proof of Lemma 9 is given in App. E. Let l t (w) be the first order approximation of F (w) at w(t): DISPLAYFORM21 Letl t (w) be an approximation of l t (w) by replacing ∇ w F (w(t)) with g(t)l t (w) = F (w(t)) + g(t), w − w(t) = F (w(t)) + ∇ w F (w(t)), w − w(t) + q(t), w − w(t) + r(t), w − w(t) = l t (w) + q(t), w − w(t) + r(t), w − w(t).Note that equation 46 follows since g(t) = q(t) + r(t) + ∇ w F (w(t)). By using the smoothness of F (w), we can write DISPLAYFORM22 The last step is due to the Cauchy-Schwarz inequality. Let α(t) = β(t) − K. We add and subtract α(t) w(t + 1) − w(t) 2 /2 to find DISPLAYFORM23 Note that DISPLAYFORM24 Similarly, we have that DISPLAYFORM25 Using equation 49, equation 50, and β(t) = K + α(t) in equation 48 we have DISPLAYFORM26 The following Lemma gives a relation between w(t) andl t (w(t))Lemma 10 The optimization stated in equation 23 is equivalent to DISPLAYFORM27 By using the (, Lemma 8), we have DISPLAYFORM28 t (w(t + 1)) + (β(t))h(w(t + 1)) DISPLAYFORM29 Use equation 51 in equation 53 and substituting in β(t) = K + α(t) we get DISPLAYFORM0 where equation 54 is due to the fact that α(t + 1) ≥ α(t). Now, we use β(t) = K + α(t), multiply by c(t + 1) and rewrite DISPLAYFORM1 Summing from t = 1 to τ − 1 we get DISPLAYFORM0 Let ψ be the last two terms, i.e., DISPLAYFORM0 Then, using Lemma 10 DISPLAYFORM1 By substituting in equation 45 we continue DISPLAYFORM2 where equation 57 is due to convexity of F (w), i.e., DISPLAYFORM3. Adding and subtracting terms we find that DISPLAYFORM4 Taking the expectation with respect to X(τ − 1) DISPLAYFORM5 We use the bounds in Lemma 9 to get DISPLAYFORM6 We rewrite by rearranging terms DISPLAYFORM7 Now we bound E [ψ]. From equation 56 we find DISPLAYFORM8 where equation 59 is due to Lemma 10, equation 60 is simple substitution of equation 45, and the last step is due to convexity of F (w). Now, we take the expectation over data samples X(τ − 1) DISPLAYFORM9 DISPLAYFORM10 where Lemma 9 is used in equation 62 and the last step is due to equation 64. This completes the proof of Lemma 8. We know β(t) = K + α(t). Let α(t) = t µ. Then, we have DISPLAYFORM0 Similarly, DISPLAYFORM1 E PROOF OF LEMMA 9Note that the expectation with respect to x s (t) DISPLAYFORM2 Also we use the fact that gradient and expectation operators commutes DISPLAYFORM3 Bounding E[q(t), w * − w(t) ] and E[q(t) 2 ] follows the same approach as in (, Appendix A.1) or. 
Now, we find E[r(t), w DISPLAYFORM4 DISPLAYFORM5 where equation 68 is due to the Cauchy-Schwarz inequality and equation 69 due to equation 9 and D = max w,u∈W w − u. Using equation 24 DISPLAYFORM6 Now we find E[r(t) 2 ]. DISPLAYFORM7 F PROOF OF THEOREM 4By definition DISPLAYFORM8 where c i (t) is the total number of gradients computed at the node i in the t-th epoch. We assume c i (t) is independent across network and is independent and identically distributed according to some processing time distribution p across epochs. DISPLAYFORM9 Let α(t) = t/c. Now take expectation over the c(t) to get DISPLAYFORM10 The last step is due to the fact that c(t + 1) and b(t) are independent since these are in two different epochs. Further E p [E[ψ|c(t)]] = 0. After further simplification through the use of Appendix D, we get DISPLAYFORM11 Taking the expectation over c(t) in equation 30, we have DISPLAYFORM12 By definition DISPLAYFORM13 Thenm = E p =cτ. By substitutingm and rearranging we find that DISPLAYFORM14 G PROOF OF THEOREM 7Proof: Consider an FMB method in which each node computes b/n gradients per epoch, with T i (t) denoting the time taken to complete the job. Also consider AMB with a fixed epoch duration of T. The number of gradient computations completed by the i-th node in the t-th epoch is DISPLAYFORM15 Therefore, the minibatch size b(t) computed in AMB in the t-th epoch is DISPLAYFORM16 Taking the expectation over the distribution of T i (t) in FORMULA10, and applying Jensen's inequality, we find that DISPLAYFORM17 where E p [T i (t)] = mu. Fixing the computing time to T = (1 + n/b)µ we find that E p [b(t)] ≥ b, i.e., the expected minibatch of AMB is at least as large as the minibatch size b used in the FMB.The expected computing time for τ epochs in our approach is DISPLAYFORM18 In contrast, in the FMB approach the finishing time of the tth epoch is max i∈[n] T i (t). Using the of; we find that DISPLAYFORM19 where σ is the standard deviation of T i (t). Thus τ epochs takes expected time DISPLAYFORM20 Taking the ratio of the two finishing times we find that DISPLAYFORM21 For parallelization to be meaningful, the minibatch size should be much larger than number of nodes and hence b n. This means (1 + n/b) ≈ 1 for any system of interest. Thus, DISPLAYFORM22 This completes the proof of Theorem 7. The shifted exponential distribution is given by DISPLAYFORM0 where λ ≥ 0 and ζ ≥ 0. The shifted exponential distribution models a minimum time (ζ) to complete a job, and a memoryless balance of processing time thereafter. The λ parameter dictates the average processing speed, with larger λ indicating faster processing. The expected finishing time is DISPLAYFORM1 By using order statistics, we can find DISPLAYFORM2 and thus τ epochs takes expected time DISPLAYFORM3 Taking the ratio of the two finishing times we find that DISPLAYFORM4 For parallelization to be meaningful we must have much more data than nodes and hence b n. This means that the first factor in the denominator will be approximately equal to one for any system of interest. Therefore, in the large n regime, DISPLAYFORM5 which is order-log(n) since the product λζ is fixed. In this section, we present additional details regarding the numerical of Section 6 of the main paper as well as some new . In Appendix I.1, we detail the network used in Section 6 and, for a point of comparison, implement the same computations in a master-worker network topology. 
In Appendix I.2, we model the compute times of the nodes as shifted exponential random variables and, under this model, present contrasting AMB and FMB performance for the linear regression problem. In Appendix I.3 we present an experimental methodology for simulating a wide variety of straggler distributions in EC2. By running jobs on some of the EC2 nodes we slow the foreground job of interest, thereby simulating a heavily-loaded straggler node. Finally, in Appendix I.4, we present another experiment in which we also induce stragglers by forcing the nodes to make random pauses between two consecutive gradient calculations. We present numerical for both settings as well, demonstrating the even greater advantage of AMB versus FMB when compared to the presented in Section 6. As there was not space in the main text, in FIG4 we diagram the connectivity of the distributed computation network used in Section 6. The second largest eigenvalue of the P matrix corresponding to this network, which controls the speed of consensus, is 0.888.In Section 6, we presented for distributed logistic regression in the network depicted in FIG4. Another network topology of great interest is the hub-and-spoke topology wherein a central master node is directly connected to a number of worker nodes, and worker nodes are only indirectly connected via the master. We also ran the MNIST logistic regression experiments for this topology. In our experiments there were 20 nodes total, 19 workers and one master. As in Sec.6 we used t2.micro instances and ami-62b11202 to launch the instances. We set the total batch size used in FMB to be b = 3990 so, with n = 19 worker each worker calculated b/n = 210 gradients per batch. Working with this per-worker batch size, we found the average EC2 compute time per batch to be 3 sec. Therefore, we used a compute time of T = 3 sec. in the AMB scheme while the communication time of T c = 1 sec. Figure 3 plots the logistical error versus wall clock time for both AMB and FMB in the master-worker (i.e., hub-and-spoke) topology. We see that the workers implementing AMB far outperform those implementing FMB. In this section, we model the speed of each worker probabilistically. Let T i (t) denote the time taken by worker i to calculate a total of 600 gradients in the t-th epoch. We assume T i (t) follows a shifted exponential distribution and is independent and identically distributed across nodes (indexed by i) and across computing epochs (indexed by t). The probability density function of the shifted exponential is p Ti(t) (z) = λe −λ(z−ζ). The mean of this distribution is µ = ζ + λ −1 and its variance is λ −2. Conditioned on T i (t) we assume that worker i makes linear progress through the dataset. In other words, worker i takes kT i (t)/600 seconds to calculate k gradients. (Note that our model allows k to exceed 600.) In the simulation we present we choose λ = 2/3 and ζ = 1. In the AMB scheme, node i computes b i (t) = 600T /T i (t) gradients in epoch t where T is the fixed computing time allocated. To ensure a fair comparison between FMB and AMB, T is chosen according to Thm. 7. This means that E[b(t)] ≥ b where b(t) = i b i (t) and b is the fixed minibatch size used by FMB. Based on our parameter choices, T = (1 + n/b)µ = (1 + n/b) (λ −1 + ζ) = 2.5. generate 20 sample paths; each sample path is a set {T i (t)} for i ∈ {1, . . ., 20} and t ∈ {1, . . . 20}.At the end of each of the 20 computing epoch we conduct r = 5 rounds of consensus. As can be observed in Fig. 
4, for all 20 sample paths AMB outperforms FMB. One can also observe that for neither scheme is there much variance in performance across sample paths; there is a bit more for FMB than for AMB. Due to this small variability, in the rest of this discussion we pick a single sample path to plot. Figures 5a and 5b help us understand the performance impact of imperfect consensus on both AMB and FMB. In each we plot the consensus error for r = 5 rounds of consensus and for perfect consensus (r = infinity). In the former we plot the error versus the number of computing epochs, while in the latter we plot it versus wall clock time. In the former there is very little difference between AMB and FMB. This is due to the fact that we have set the computation times so that the expected AMB batch size equals the fixed FMB batch size. On the other hand, there is a large performance gap between the schemes when plotted versus wall clock time. It is thus in terms of real time (not epoch count) that AMB strongly outperforms FMB. In particular, AMB reaches an error rate of 10^-3 in less than half the time it takes FMB (2.24 times faster, to be exact). In this section, we introduce a new experimental methodology for studying the effect of stragglers. In these experiments we induce stragglers amongst our EC2 t2.micro instances by running interfering jobs. In our experiments, there were 10 compute nodes interconnected according to the topology of FIG4. The 10 worker nodes were partitioned into three groups. In the first group we run two jobs that "interfere" with the foreground (AMB or FMB) job. The interfering jobs we used were matrix multiplication jobs that ran continuously during the experiment. This first group contains the "bad" straggler nodes. In the second group we run a single interfering job; these are the intermediate stragglers. In the third group we do not run interfering jobs; these are the non-stragglers. In our experiments, there are three bad stragglers (workers 1, 2, and 3), two intermediate stragglers (workers 4 and 5), and five non-stragglers (workers 6-10). We first launch the interfering jobs in groups one and two. We then launch the FMB jobs on all nodes at once. By simultaneously running the interfering jobs and FMB, the resources of nodes in the first two groups are shared across multiple tasks, resulting in an overall slowdown in their computing. The slowdown can be clearly observed in Figure 6a, which depicts the histogram of the FMB compute times. The count ("frequency") is the number of jobs (fixed minibatches) completed as a function of the time it took to complete the job. The third (fast) group is on the left, clustered around 10 seconds per batch, while the other two groups are clustered at roughly 20 and 30 seconds. Figure 6b depicts the same experiment as performed with AMB: first launching the interfering jobs, and then launching AMB in parallel on all nodes. In this scenario compute time is fixed, so the histogram plots the number of completed batches as a function of batch size. In the AMB experiments the bad straggler nodes appear in the first cluster (centered around a batch size of 230) while the faster nodes appear in the clusters to the right. In the FMB histogram the per-worker batch size was fixed to 585, while in the AMB histograms the compute time was fixed to 12 sec. We observe that these empirical results confirm the conditionally deterministic aspects of our statistical model of Appendix I.2. This was the portion of the model wherein we assumed that nodes make linear progress conditioned on the time it takes to compute one batch.
In Figure 6a, we observe it takes the non-straggler nodes about 10 seconds to complete one fixed-sized minibatch. It takes the intermediate nodes about twice as long. Turning to the AMB plots we observe that, indeed, the intermediate stragglers nodes complete only about 50% of the work that the non-straggler nodes do in the fixed amount of time. Hence this "linear progress" aspect of our model is confirmed experimentally. Figure 7 illustrates the performance of AMB and FMB on the MNIST regression problem in the setting of EC2 with induced stragglers. As can be observed by comparing these to those presented in FIG1 of Section 6, the speedup now effected by AMB over FMB is far larger. While in FIG1 the AMB was about 50% faster than FMB it is now about twice as fast. While previously AMB effect a reduction of 30% in the time it took FMB to hit a target error rate, the reduction now is about 50%. Generally as the variation amongst stragglers increases we will see a corresponding improvement in AMB over FMB. We conducted another experiment on a high-performance computing (HPC) platform that consists of a large number of nodes. Jobs submitted to this system are scheduled and assigned to dedicated nodes. Since nodes are dedicated, no obvious stragglers exist. Furthermore, users of this platform do not know which tasks are assigned to which node. This means that we were not able to use the same approach for inducing stragglers on this platform as we used on EC2. In EC2, we ran simulations on certain nodes to slow them down. But, since in this HPC environment we cannot tell where our jobs are placed, we are not able to place additional jobs on a subset of those same nodes to induce stragglers. Therefore, we used a different approach for inducing stragglers as we now explain. First, we ran the MNIST classification problem using 51 nodes: one master and 50 worker nodes where workers nodes were divided into 5 groups. After each gradient calculation (in both AMB and FMB), worker i pauses its computation before proceeding to the next iteration. The duration of the pause of the worker in epoch t after calculating the s-th gradient is denoted by T i (t, s). We modeled the T i (t, s) as independent of each other and each T i (t, s) is drawn according to the normal distribution N (µ j, σ 2 j) if worker i is in group j ∈. If T i (t, s) < 0, then there is no pause and the worker starts calculating the next gradient immediately. Groups with larger µ j model worse stragglers and larger σ 2 j models more variance in that straggler's delay. In AMB, if the remaining time to compute gradients is less than the sampled T i (t, s), then the duration of the pause is the remaining time. In other words, the node will not calculate any further gradients in that epoch but will pause till the end of the compute phase before proceeding to consensus rounds. In our experiment, we chose (µ 1, µ 2, µ 3, µ 4, µ 5) = and σ 2 j = j 2. In the FMB experiment, each worker calculated 10 gradients leading to a fixed minibatch size b = 500 while in AMB each worker was given a fixed compute time, T = 115 msec. which ed in an empirical average minibatch size b ≈ 504 across all epochs. Figures 8a and 8b respectively depict the histogram of the compute time (including the pauses) for FMB and the histogram of minibatch sizes for AMB obtained in our experiment. In each histogram, five distinct distributions can be discerned, each representing one of the five groups. 
Notice that the fastest group of nodes has the smallest average compute time (the leftmost spike in FIG11) and the largest average minibatch size (the rightmost distribution in FIG11). In FIG12, we compare the logistic regression performance of AMB with that of FMB for the MNIST data set. Note that AMB achieves its lowest cost in 2.45 sec., while FMB achieves the same cost only at 12.7 sec. In other words, the convergence rate of AMB is more than five times faster than that of FMB.
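To make the wall-time comparison of App. I.2 easy to reproduce, here is a compact simulation sketch under the shifted exponential model (lambda = 2/3, zeta = 1, as above). The per-epoch FMB time is max_i T_i(t), while AMB fixes T = (1 + n/b)*mu and node i completes (b/n) * T / T_i(t) gradients (the linear-progress assumption); the specific n, b, tau below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, b, tau = 10, 6000, 1000          # nodes, FMB global minibatch, epochs
lam, zeta = 2.0 / 3.0, 1.0          # shifted exponential parameters
mu = zeta + 1.0 / lam               # mean time for a node to do b/n gradients

# T_i(t): time for node i to compute b/n gradients, i.i.d. shifted exponential.
T = zeta + rng.exponential(1.0 / lam, size=(tau, n))

# FMB: every epoch waits for the slowest node.
fmb_time = T.max(axis=1).sum()

# AMB: fixed epoch length chosen as in Lemma 6, variable minibatch sizes.
T_amb = (1.0 + n / b) * mu
amb_time = tau * T_amb
amb_batch = ((b / n) * T_amb / T).sum(axis=1)   # linear-progress assumption

print(f"mean AMB minibatch {amb_batch.mean():.0f} vs FMB {b}")
print(f"wall-time speedup  {fmb_time / amb_time:.2f}x")
```

For these parameters the simulated speedup comes out a little over 2x, in line with the O(log n) behaviour proved in App. H.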
Accelerate distributed optimization by exploiting stragglers.
971
scitldr
Many machine learning image classifiers are vulnerable to adversarial attacks: inputs with perturbations designed to intentionally trigger misclassification. Current adversarial methods directly alter pixel colors and evaluate against pixel norm-balls: pixel perturbations smaller than a specified magnitude, according to a measurement norm. This evaluation, however, has limited practical utility, since perturbations in the pixel space do not correspond to the underlying real-world phenomena of image formation that lead to them, and has no security motivation attached. Pixels in natural images are measurements of light that has interacted with the geometry of a physical scene. As such, we propose a novel evaluation measure, parametric norm-balls, obtained by directly perturbing the physical parameters that underly image formation. One enabling contribution we present is a physically-based differentiable renderer that allows us to propagate pixel gradients to the parametric space of lighting and geometry. Our approach enables physically-based adversarial attacks, and our differentiable renderer leverages models from the interactive rendering literature to balance the performance and accuracy trade-offs necessary for a memory-efficient and scalable adversarial data augmentation workflow. Research in adversarial examples continues to contribute to the development of robust (semi-)supervised learning, data augmentation (BID14), and machine learning understanding (BID24). One important caveat of the approach pursued by much of the literature in adversarial machine learning, as discussed recently (Goodfellow, 2018; BID12), is the reliance on overly simplified attack metrics: namely, the use of pixel value differences between an adversary and an input image, also referred to as the pixel norm-balls. Figure 1: Traditional pixel-based adversarial attacks yield unrealistic images under a larger perturbation (L-infinity norm of approximately 0.82), whereas our parametric lighting and geometry perturbations output more realistic images under the same norm (more in Appendix A); the panels compare the original image with parametric (lighting), parametric (geometry), texture [Athalye 17], color, multi-step pixel [Moosavi Dezfooli 16], and one-step pixel [Goodfellow 14] perturbations. Figure 2: Parametrically-perturbed images remain natural, whereas pixel-perturbed ones do not. The pixel norm-balls game considers pixel perturbations of norm-constrained magnitude (BID14), and is used to develop adversarial attackers, defenders and training strategies. The pixel norm-ball game is attractive from a research perspective due to its simplicity and well-posedness: no knowledge of image formation is required and any arbitrary pixel perturbation remains eligible (so long as it is "small", in the perceptual sense). Although the pixel norm-ball is useful for research purposes, it only captures limited real-world security scenarios. Despite the ability to devise effective adversarial methods through the direct employment of optimizations using the pixel norm-balls measure, the pixel manipulations they promote are divorced from the types of variations present in the real world, limiting their usefulness "in the wild". Moreover, this methodology leads to defenders that are only effective when defending against unrealistic images/attacks, not generalizing outside of the space constrained by pixel norm-balls.
In order to consider conditions that enable adversarial attacks in the real world, we advocate for a new measurement norm that is rooted in the physical processes that underly realistic image synthesis, moving away from overly simplified metrics such as pixel norm-balls. Our proposed solution, parametric norm-balls, relies on perturbations of the physical parameters of a synthetic image formation model, instead of pixel color perturbations (Figure 2). To achieve this, we use a physically-based differentiable renderer which allows us to perturb the underlying parameters of the image formation process. Since these parameters indirectly control pixel colors, perturbations in this parametric space implicitly span the space of natural images. We will demonstrate two advantages that fall out of considering perturbations in this parametric space: they enable adversarial approaches that more readily apply to real-world applications, and they permit the use of much more significant perturbations (compared to pixel norms) without invalidating the realism of the resulting image (Figure 1). We validate that playing the parametric norm-balls game is critical for a variety of important adversarial tasks, such as building defenders robust to perturbations that can occur naturally in the real world. We perform perturbations in the underlying image formation parameter space using a novel physically-based differentiable renderer. Our renderer analytically computes the derivatives of pixel color with respect to these physical parameters, allowing us to extend traditional pixel norm-balls to physically-valid parametric norm-balls. Notably, we demonstrate perturbations on an environment's lighting and on the shape of the 3D geometry it shades. Our differentiable renderer achieves state-of-the-art performance in speed and scalability (Section 3) and is fast enough for rendered adversarial data augmentation (Section 5): training augmented with adversarial images generated with a renderer. Existing differentiable renderers are slow and do not scale to the volume of high-quality, high-resolution images needed to make adversarial data augmentation tractable (Section 2). Given our analytically-differentiable renderer (Section 3), we are able to demonstrate the efficacy of parametric space perturbations for generating adversarial examples. These adversaries are based on a substantially different phenomenology than their pixel norm-ball counterparts (Section 4). Ours is among the first steps towards the deployment of rendered adversarial data augmentation in real-world applications: we train a classifier with computer-generated adversarial images, evaluating the performance of the training against real photographs (i.e., captured using cameras; Section 5). We test on real photos to show that parametric adversarial data augmentation increases the classifier's robustness to "deformations" that occur in the real world. Our evaluation differs from the majority of the existing literature, which evaluates against computer-generated adversarial images, since our parametric space perturbation is no longer a wholly idealized representation of the image formation model but is, instead, modeled against a theory of realistic image generation.
Our work is built upon the fact that simulated or rendered images can participate in computer vision and machine learning on real-world tasks. Many previous works use rendered (simulated) data to train deep networks, and those networks can be deployed to the real world or even outperform state-of-the-art networks trained on real photos (BID6; BID4; BID22; BID21). For instance, Veeravasarapu et al. (2017a) show that training with 10% real-world data and 90% simulation data can reach the level of training with full real data. Some works even demonstrate that a network trained on synthetic data yields better performance than using real data alone. As rendering can cheaply provide a theoretically infinite supply of annotated input data, it can generate data which is orders of magnitude larger than existing datasets. This emerging trend of training on synthetic data provides an exciting direction for future machine learning development. Our work complements these works: we demonstrate that rendering can be used to study the potential danger lurking in misclassification due to subtle changes to geometry and lighting. This suggests a future direction of combining our method with synthetic data generation pipelines to perform physically based adversarial training on synthetic data. Szegedy et al. expose the vulnerability of modern deep neural nets using purposefully-manipulated images with human-imperceptible, misclassification-inducing noise. BID14 introduce a fast method to harness adversarial examples, leading to the idea of pixel norm-balls for evaluating adversarial attackers/defenders. Since then, many significant developments in adversarial techniques have been proposed (BID1; BID29; BID7; BID5). Our work extends this progression in constructing adversarial examples, a problem that lies at the foundation of adversarial machine learning. BID28 study the transferability of attacks to the physical world by printing and then photographing adversarial images. BID2 and BID10 propose extensions to non-planar (yet still fixed) geometry and multiple viewing angles. These works still rely fundamentally on direct pixel or texture manipulation of physical objects. Since these methods assume independence between pixels in the image or texture space, they remain variants of pixel norm-balls, leading to unrealistic attack images that cannot model real-world scenarios (BID13; BID18; BID12). Zeng et al. generate adversarial examples by altering physical parameters using a rendering network (BID19) trained to approximate the physics of realistic image formation. This data-driven approach leads to an image formation model biased towards the rendering style present in the training data. This method also relies on differentiation through the rendering network in order to compute adversaries, which requires high-quality training on a large amount of data. Even with perfect training, their reported performance shows that it still requires 12 minutes on average to find new adversaries, whereas we take only a few seconds (Section 4.1). Our approach is based on a differentiable physically-based renderer that directly (and, so, more convincingly) models the image formation process, allowing us to alter physical parameters, like geometry and lighting, and compute derivatives (and adversarial examples) much more rapidly. We summarize the differences between our approach and previous non-image adversarial attacks in Table 1. Table 1: Previous non-pixel attacks fall short in either the range of parameters they can take derivatives with respect to or in performance; the columns compare performance (Perf.) and support for derivatives with respect to color, normals, material, lighting, and geometry, for Zeng 17 and our method. Differentiable renderers have also been used elsewhere, e.g., by BID2, and in generalizing neural style transfer to a 3D context (BID25).
Our renderer explicitly models the physics of the image formation process, and so the images it generates are realistic enough to elicit correct classifications from networks trained on real-world photographs. Adversarial attacks based on pixel norm-balls typically generate adversarial examples by defining a cost function over the space of images C: I → R that enforces some intuition of what failure should look like, typically using variants of gradient descent where the gradient ∂C/∂I is accessible by differentiating through the networks (BID14; BID29; BID7). The choices for C include increasing the cross-entropy loss of the correct class (BID14), decreasing the cross-entropy loss of the least-likely class (BID29), using a combination of cross-entropies, and more (BID7; Tramèr et al., 2017). We use a combination of cross-entropies to provide the flexibility for choosing untargeted and targeted attacks by specifying a different set of labels:

C(I) = Σ_{l ∈ L_d} log f_l(I) − Σ_{l ∈ L_i} log f_l(I),

where I is the image, f(I) is the output of the classifier, and L_d, L_i are the labels whose predicted confidences a user wants to decrease and increase, respectively. In our experiments, L_d is the correct class and L_i is either ignored or chosen according to user preference. Our adversarial attacks in the parametric space consider an image I(U, V) as a function of the physical parameters of the image formation model, including the lighting U and the geometry V. Adversarial examples constructed by perturbing physical parameters can then be computed via the chain rule:

∂C/∂U = (∂C/∂I)(∂I/∂U),   ∂C/∂V = (∂C/∂I)(∂I/∂V),

where ∂I/∂U and ∂I/∂V are derivatives with respect to the physical parameters, which we evaluate using our physically based differentiable renderer. In our experiments, we use gradient descent for finding parametric adversarial examples, where the descent directions are given by ∂C/∂U and ∂C/∂V. Rendering is the process of generating a 2D image from a 3D scene by simulating the physics of light. Light sources in the scene emit photons that then interact with objects in the scene. At each interaction, photons are either reflected, transmitted or absorbed, changing trajectory and repeating until arriving at a sensor such as a camera. A physically based renderer models these interactions mathematically, and our task is to analytically differentiate the physical process.

Figure 4: By changing the lighting, we fool the classifier into seeing a miniskirt (top-1 confidence 28%, originally t-shirt 86%) and a water tower (48%, originally street sign 57%), demonstrating the existence of adversarial lighting.

We develop our differentiable renderer with common assumptions in real-time rendering - diffuse material, local illumination, and distant light sources. Our diffuse material assumption considers materials which reflect light uniformly in all directions, equivalent to considering non-specular objects. We assume that variations in the material (texture) are piece-wise constant with respect to our triangle mesh discretization. The local illumination assumption only considers light that bounces directly from the light source to the camera. Lastly, we assume light sources are far away from the scene, allowing us to represent lighting with one spherical function. For a more detailed rationale of our assumptions, we refer readers to Appendix B.
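To make the attack concrete, the following sketch shows how the cost and the chain rule above could be driven with automatic differentiation. It is a minimal illustration under stated assumptions, not our actual implementation: render and classifier are hypothetical stand-ins for a differentiable renderer and a pre-trained network returning unbatched logits, and autograd supplies (∂C/∂I)(∂I/∂U) for us.

```python
import torch

def parametric_attack(U, V, render, classifier, label_decrease,
                      label_increase=None, step=0.05, max_iters=30):
    """Gradient-descent attack in the physical parameter space.

    U: lighting coefficients, V: mesh vertices. `render` and `classifier`
    are assumed differentiable; autograd chains dC/dI through dI/dU, dI/dV.
    """
    U = U.clone().requires_grad_(True)
    V = V.clone().requires_grad_(True)
    for _ in range(max_iters):
        I = render(U, V)                       # image as a function of U, V
        log_probs = torch.log_softmax(classifier(I), dim=-1)
        if log_probs.argmax() != label_decrease:
            break                              # classifier already fooled
        cost = log_probs[label_decrease]       # push true-class confidence down
        if label_increase is not None:
            cost = cost - log_probs[label_increase]  # pull target class up
        cost.backward()
        with torch.no_grad():                  # descend on C in parameter space
            U -= step * U.grad
            V -= step * V.grad
        U.grad.zero_()
        V.grad.zero_()
    return U.detach(), V.detach()
```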
These assumptions simplify the complicated integral required for rendering (BID23) and allow us to represent lighting in terms of spherical harmonics, an orthonormal basis for spherical functions analogous to the Fourier basis. Thus, we can analytically differentiate the rendering equation to acquire derivatives with respect to lighting, geometry, and texture (derivations are found in Appendix C). Using analytical derivatives avoids the pitfalls of previous differentiable renderers (see Section 2) and makes our differentiable renderer orders of magnitude faster than the previous fully differentiable renderer OPENDR (see Figure 3). Our approach scales to problems with more than 100,000 variables, while OPENDR runs out of memory for problems with more than 3,500 variables. Adversarial lighting denotes adversarial examples generated by changing the spherical harmonics lighting coefficients U (BID15). As our differentiable renderer allows us to compute ∂I/∂U analytically (the derivation is provided in Appendix C.4), we can simply apply the chain rule:

∂C/∂U = (∂C/∂I)(∂I/∂U),

where ∂C/∂I is the derivative of the cost function with respect to pixel colors and can be obtained by differentiating through the network. Spherical harmonics act as an implicit constraint that prevents unrealistic lighting, because natural lighting environments in everyday life are dominated by low-frequency signals. For instance, the rendering of diffuse materials can be approximated with only 1% pixel intensity error by the first 2 orders of spherical harmonics. As computers can only represent a finite number of coefficients, using spherical harmonics for lighting implicitly filters out high-frequency, unrealistic lightings. Thus, perturbing the parametric space of spherical harmonics lighting gives us more realistic results compared to image-pixel perturbations (Figure 1).

Figure 6: By specifying different target labels, we can create an optical illusion: after geometry perturbations, a jaguar is classified as an Egyptian cat (90%) and a hunting dog (93%) from two different views.

Adversarial geometry is an adversarial example computed by changing the positions of the shape's surface points. The shape is encoded as a triangle mesh with |V| vertices and |F| faces; surface points are vertex positions V ∈ R^(|V|×3), which determine per-face normals N ∈ R^(|F|×3), which in turn determine the shading of the surface. We can compute adversarial shapes by applying the chain rule:

∂C/∂V = (∂C/∂I)(∂I/∂N)(∂N/∂V),

where ∂I/∂N is computed via a derivation in Appendix E. Each triangle only has one normal on its face, making ∂N/∂V computable analytically. In particular, the 3 × 3 Jacobian of the unit face normal vector n_i ∈ R^3 of the i-th face of the triangle mesh V with respect to one of its corner vertices v_j ∈ R^3 is

∂n_i/∂v_j = −(h_ij n_i^T) / ‖h_ij‖²,

where h_ij ∈ R^3 is the height vector: the shortest vector to the corner v_j from the opposite edge. We have described how to compute adversarial examples by parametric perturbations, including lighting and geometry. In this section, we show that adversarial examples exist in the parametric spaces, and then we analyze the characteristics of those adversaries and of parametric norm-balls. We use 49 × 3 spherical harmonics coefficients to represent environment lighting, with an initial real-world lighting condition. Camera parameters and background images are empirically chosen to have correct initial classifications and to avoid synonym sets.
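The rank-1 Jacobian above is simple enough to implement directly. The sketch below, a minimal illustration assuming counter-clockwise vertex ordering, computes the height vector of a corner and the resulting 3 × 3 Jacobian with NumPy; it is not our renderer's code.

```python
import numpy as np

def face_normal_jacobian(v0, v1, v2):
    """Jacobian dn/dv0 of the triangle's unit normal w.r.t. corner v0.

    Implements dn/dv0 = -(h n^T) / ||h||^2, where h is the height vector
    from the opposite edge (v1, v2) to the corner v0.
    """
    n = np.cross(v1 - v0, v2 - v0)
    n = n / np.linalg.norm(n)                 # unit face normal
    e = (v2 - v1) / np.linalg.norm(v2 - v1)   # opposite edge direction
    d = v0 - v1
    h = d - np.dot(d, e) * e                  # height vector: edge -> corner
    return -np.outer(h, n) / np.dot(h, h)     # 3x3 Jacobian

# Example: lifting the corner out of plane tilts the normal away from it.
v0, v1, v2 = np.array([1., 0., 0.]), np.array([0., 1., 0.]), np.array([0., -1., 0.])
J = face_normal_jacobian(v0, v1, v2)
print(J @ np.array([0., 0., 1e-3]))  # -> approximately [-1e-3, 0, 0]
```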
In Figure 4 we show that a single-view adversarial lighting attack can fool the classifier (a ResNet-101 pre-trained on ImageNet (BID17)). Figure 5 shows multi-view adversarial lighting, which optimizes the summation of the cost functions for each view; thus the gradient is computed as the summation over all camera views:

∂C/∂U = Σ_i (∂C/∂I_i)(∂I_i/∂U),

where I_i is the image rendered from the i-th camera view.

Figure 7: Even if we further constrain to a lighting subspace, skylight, we can still find adversaries (misclassified as missile 49% and wing 33%).

If one is interested in a more specific subspace, such as outdoor lighting conditions governed by sunlight and weather, our adversarial lighting can adapt to it. In Figure 7, we compute adversarial lights over the space of skylights by applying one more chain rule to the Preetham skylight parameters (BID16). Details about taking these derivatives are provided in Appendix D. Although adversarial skylights exist, their low number of degrees of freedom (only three parameters) makes it more difficult to find adversaries. In Figure 8 and Figure 9 we show the existence of adversarial geometry in both the single-view and multi-view cases. Note that we upsample meshes to have >10K vertices as a preprocessing step to increase the degrees of freedom available for perturbations. Multi-view adversarial geometry enables us to perturb the same 3D shape from different viewing directions, which allows us to construct a deep optical illusion: the same 3D shape is classified differently from different angles. To create the optical illusion in Figure 6, we only need to specify the labels L_i in the cost function to be a dog and a cat for the two different views.

Figure 9: We construct a single adversarial geometry that fools the classifier into seeing a mailbox (71%, 61%, 51%) instead of a street sign (86%, 99%, 91%) from different angles.

To further understand parametric adversaries, we analyze how parametric adversarial examples generalize to black-box models. In Table 2, we test 5,000 ResNet parametric adversaries on unseen networks including AlexNet (BID27), DenseNet (BID19), SqueezeNet, and VGG. Our results show that parametric adversarial examples also transfer across models. In addition to different models, we evaluate parametric adversaries on black-box viewing directions. This evaluation mimics the real-world scenario in which a self-driving car would "see" a stop sign from different angles while driving. In Table 3, we randomly sample 500 correctly classified views for a given shape, perform our adversarial lighting and geometry algorithms only on a subset of the views, and then evaluate the resulting adversarial lights/shapes on all the views. The results show that adversarial lights generalize better to fooling unseen views; adversarial shapes are less generalizable.

Figure 10: A quantitative comparison using parametric norm-balls shows that adversarial lighting/geometry perturbations have a higher success rate (%) in fooling classifiers compared to random perturbations in the parametric spaces.

Switching from pixel norm-balls to parametric norm-balls only requires changing the norm constraint from the pixel color space to the parametric space. For instance, we can perform a quantitative comparison between parametric adversarial and random perturbations (Figure 10). We use an L∞-norm of 0.1 to constrain the perturbed magnitude of each lighting coefficient, and an L∞-norm of 0.002 to constrain the maximum displacement of surface points along each axis.
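The multi-view objective is just a sum of per-view costs, so its gradient is the sum of per-view gradients. The following sketch, with the same hypothetical render functions and classifier as above, illustrates this; calling backward() on the returned cost accumulates Σ_i (∂C/∂I_i)(∂I_i/∂U) automatically.

```python
import torch

def multiview_cost(U, V, renders, classifier, label_decrease):
    """Sum of per-view costs over a list of per-view render functions
    (one fixed camera each); autograd sums the per-view gradients."""
    cost = 0.0
    for render_i in renders:
        I_i = render_i(U, V)                     # view-specific image
        log_probs = torch.log_softmax(classifier(I_i), dim=-1)
        cost = cost + log_probs[label_decrease]  # decrease true-class confidence
    return cost  # cost.backward() yields the summed gradients w.r.t. U and V
```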
The results show how many parametric adversaries can fool the classifier out of 10,000 adversarial lights and shapes, respectively. Not only do the parametric norm-balls show the effectiveness of adversarial perturbations; evaluating robustness using parametric norm-balls also has real-world implications. An inset table (runtime versus the number of pixels, for adversarial geometry) presents our runtime per iteration for computing derivatives. An adversary normally requires fewer than 10 iterations, and thus takes a few seconds. We evaluate our CPU PYTHON implementation and the OPENGL rendering on an Intel Xeon 3.5GHz CPU with 64GB of RAM and an NVIDIA GeForce GTX 1080. Our runtime depends on the number of pixels requiring derivatives. We inject adversarial examples, generated using our differentiable renderer, into the training process of modern image classifiers. Our goal is to increase the robustness of these classifiers to real-world perturbations. Traditionally, adversarial training is evaluated against computer-generated adversarial images (BID29; Tramèr et al., 2017). In contrast, our evaluation differs from the majority of the literature, as we evaluate performance against real photos (i.e., images captured using a camera), and not computer-generated images. This evaluation method is motivated by our goal of increasing a classifier's robustness to "perturbations" that occur in the real world and arise from the physical processes underlying real-world image formation. We present preliminary steps towards this objective, resolving the lack of realism of pixel norm-balls and evaluating our augmented classifiers (i.e., those trained using our rendered adversaries) against real photographs. Training We train the WideResNet (16 layers, wide factor 4) on CIFAR-100 (BID26) augmented with adversarial lighting examples. We apply a common adversarial training method that adds a fixed number of adversarial examples each epoch (BID14; BID29). We refer readers to Appendix F for the training details. In our experiments, we compare three training scenarios: CIFAR-100, CIFAR-100 + 100 images under random lighting, and CIFAR-100 + 100 images under adversarial lighting. Compared to the accuracy reported in the original WideResNet work, the WideResNets trained in these three cases all have comparable performance (≈ 77%) on the CIFAR-100 test set.

Figure 11: Unlike much of the literature on adversarial training, we evaluate against real photos (captured by a camera), not computer-generated images. This figure illustrates a subset of our test data.

Testing We create a test set of real photos, captured in a laboratory setting with controlled lighting and camera parameters: we photographed oranges using a calibrated Prosilica GT 1920 camera under different lighting conditions, each generated by projecting different lighting patterns using an LG PH550 projector. This hardware lighting setup projects lighting patterns from a fixed solid angle of directions onto the scene objects. Figure 11 illustrates samples from the 500 real photographs of our dataset. We evaluate the robustness of our classifier models according to test accuracy. Of note, the average prediction accuracies over five trained WideResNets on our test data under the three training cases are 4.6%, 40.4%, and 65.8%, respectively. This supports the fact that training on rendered images can improve the network's performance on real photographs. Our preliminary experiments motivate the potential of relying on rendered adversarial training to increase robustness to visual phenomena present in real-world inputs.
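The augmentation scheme above (a fixed number of adversarial examples added each epoch) can be sketched as follows. This is an illustrative outline only, with a hypothetical renderer_attack helper standing in for the adversarial-lighting attack of Section 4, and a loader assumed to yield (images, labels) tensor batches.

```python
import torch

def adversarial_augmented_training(model, loader, renderer_attack,
                                   scenes, n_adv=100, epochs=150):
    """Rendered adversarial data augmentation sketch: per epoch, append a
    fixed number of fresh adversarial renderings to the training batches."""
    opt = torch.optim.SGD(model.parameters(), lr=0.125,
                          momentum=0.9, nesterov=True, weight_decay=5e-4)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        # adversarial renderings are generated against the current model
        adv_imgs = torch.stack([renderer_attack(s, model) for s, _ in scenes[:n_adv]])
        adv_labels = torch.tensor([y for _, y in scenes[:n_adv]])
        for images, labels in list(loader) + [(adv_imgs, adv_labels)]:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
    return model
```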
Using parametric norm-balls to remove the lack of realism of pixel norm-balls is only the first step in bringing adversarial machine learning to the real world. More evaluations beyond the lab experimental data could uncover the potential of rendered adversarial data augmentation. Coupling the differentiable renderer with methods for reconstructing 3D scenes has the potential to develop a complete pipeline for rendered adversarial training. We could take a small set of real images, construct 3D virtual scenes which have real image statistics, use our approach to manipulate the predicted parameters to construct parametric adversarial examples, and then perform rendered adversarial training. This direction has the potential to produce limitless simulated adversarial data augmentation for real-world tasks. Our differentiable renderer models changes in realistic environment lighting and geometry. Incorporating real-time rendering techniques from the graphics community could further improve the quality of rendering. Removing the locally constant texture assumption could improve our results. Extending the derivative computation to materials could enable "adversarial materials". Incorporating derivatives of visibility changes and propagating gradient information to a shape skeleton could also create "adversarial poses". These extensions offer a set of tools for modeling real security scenarios. We extend our comparisons against pixel norm-ball methods (Figure 1) by visualizing the results and the generated perturbations (Figure 12; columns: original image, parametric (lighting), texture color [Athalye 17], one-step pixel [Goodfellow 14], multi-step pixel [Moosavi Dezfooli 16]). We hope this figure elucidates that our parametric perturbations remain realistic across several scales of perturbation. Physically based rendering (PBR) seeks to model the flow of light, typically under the assumption that there exists a collection of light sources that generate light, a camera that receives this light, and a scene that modulates the flow of light between the light sources and the camera. What follows is a brief discussion of the general task of rendering an image from a scene description and the approximations we take in order to make our renderer efficient yet differentiable. Computer graphics has dedicated decades of effort into developing methods and technologies that enable PBR to synthesize photorealistic images under a large gamut of performance requirements. Much of this work is focused around taking approximations of the cherished rendering equation (BID23), which describes the propagation of light through a point in space. If we let u_o be the outgoing radiance, p be the point in space, ω_o be the outgoing direction, u_e be the emitted radiance, u_i be the incoming radiance, ω_i be the incoming angle, f_r be the way light is reflected off the material at that given point in space, and n be the surface normal at p, we have

u_o(p, ω_o) = u_e(p, ω_o) + ∫_{S²} f_r(p, ω_i, ω_o) u_i(p, ω_i) (ω_i · n) dω_i.

From now on we will ignore the emission term u_e as it is not pertinent to our discussion. Furthermore, because the speed of light is substantially faster than the exposure time of our eyes, what we perceive is not the propagation of light at an instant, but the steady-state solution to the rendering equation evaluated at every point in space. Explicitly computing this steady state is intractable for our applications and will mainly serve as a reference on which to place the plethora of assumptions and simplifications we will make for the sake of tractability.
Many of these methods focus on ignoring light with nominal effects on the final rendered image, via assumptions on the way light travels. For instance, light is usually assumed to have negligible interaction with air, which is described as the assumption that the space between objects is a vacuum and which constrains the interactions of light to the objects in a scene. Another common assumption is that light does not penetrate objects, which makes it difficult to render objects like milk and human skin. This constrains the complexity of light propagation to the behavior of light bouncing off of object surfaces.

Figure 14: Rasterization converts a 3D scene into pixels.

It is common to see assumptions that limit the number of bounces light is allowed to take. In our case we choose to assume that the steady state is sufficiently approximated by an extremely low number of iterations: one. This means that we consider it sufficient to model the lighting of a point in space by the light sent to it directly by the light sources. Working with such a strong simplification does, of course, lead to a few artifacts. For instance, light occluded by other objects is ignored, so shadows disappear, and auxiliary techniques are usually employed to evaluate shadows. When this assumption is coupled with a camera, we approach what is used in standard rasterization systems such as OPENGL, which is what we use. These systems compute the illumination of a single pixel by determining the fragment of an object visible through that pixel and only computing the light that traverses directly from the light sources, through that fragment, to that pixel. The lighting of a fragment is therefore determined by a point and the surface normal at that point, so we write the fragment's radiance as

R(p, n) = ∫_{S²} f_r(p, ω, ω_o) u(p, ω) (ω · n) dω.

Each point on an object has a model approximating the transfer of incoming light to a given output direction f_r, which is usually called the material. On a single object the material parameters may vary quite a bit, and the correspondence between points and material parameters is usually called the texture map, which forms the texture of an object. There exists a wide gamut of material models, from mirror materials that transport light from a single input direction to a single output direction, to materials that reflect light evenly in all directions, to materials like brushed metal that reflect differently along different angles. For the sake of this document we only consider diffuse materials, also called Lambertian materials, where we assume that incoming light is reflected uniformly, i.e., f_r is a constant function with respect to angle, which we denote

f_r(p, ω, ω_o) = ρ(p).

This function ρ is usually called the albedo, which can be perceived as the color on the surface of a diffuse material, and we reduce our integration domain to the upper hemisphere Ω(n) in order to model light not bouncing through objects. Furthermore, since the only ω and u that remain are the incoming ones, we can now suppress the "incoming" in our notation and just use ω and u, respectively. The illumination of static, distant objects such as the ground, the sky, or mountains does not change in any noticeable fashion when objects in a scene are moved around, so u can be written entirely in terms of ω: u(p, ω) = u(ω). If their illumination forms a constant, it seems prudent to pre-compute or cache their contributions to the illumination of a scene.
This is what is usually called environment mapping, and it fits into the rendering equation as a representation for the total lighting of a scene, i.e., the total incoming radiance u_i. Because the environment is distant, it is common to also assume that the position of the object receiving light from an environment map does not matter, so this simplifies the fragment radiance to be independent of position:

R(p, n) = ρ(p) ∫_{Ω(n)} u(ω) (ω · n) dω.

Despite all of our simplifications, the inner integral is still a fairly generic function over S². Many techniques for numerically integrating the rendering equation have emerged in the graphics community, and we choose one which enables us to perform pre-computation and select a desired spectral accuracy: spherical harmonics. Spherical harmonics are a basis on S², so, given a spherical harmonics expansion of the integrand, the evaluation of the above integral can be reduced to a weighted product of coefficients. This particular basis is chosen because it acts as a sort of Fourier basis for functions on the sphere, and so the bases are each associated with a frequency, which leads to a convenient multi-resolution structure. In fact, the rendering of diffuse objects under distant lighting can be 99% approximated by just the first few spherical harmonics bases. We will only need to note that the spherical harmonics bases Y_l^m are denoted with the subscript l as the frequency, and that there are 2l + 1 functions per frequency, denoted by superscripts m between −l and l inclusively. For further details on them please take a glance at Appendix C. If we approximate a function f in terms of spherical harmonics coefficients, f ≈ Σ_{l,m} f_{l,m} Y_l^m, the integral of a product can be pre-computed as

∫_{S²} f(ω) g(ω) dω = Σ_{l,m} f_{l,m} g_{l,m},

where g_{l,m} are the expansion coefficients of g. Thus we have defined a reduced rendering equation that can be efficiently evaluated using OPENGL while maintaining differentiability with respect to lighting and vertices. In the following appendix we will derive the derivatives necessary to implement our system. Rendering computes an image of a 3D shape given lighting conditions and the prescribed material properties on the surface of the shape. Our differentiable renderer assumes Lambertian reflectance, distant light sources, local illumination, and piece-wise constant textures. We will discuss how to explicitly compute the derivatives used in the main body of this text. Here we give a detailed discussion of spherical harmonics and their advantages. Spherical harmonics are usually defined in terms of the Legendre polynomials, which are a class of orthogonal polynomials defined by the recurrence relation

P_0(x) = 1,  P_1(x) = x,  (l + 1) P_{l+1}(x) = (2l + 1) x P_l(x) − l P_{l−1}(x).

The associated Legendre polynomials are a generalization of the Legendre polynomials and can be fully defined by the relations

P_0^0(x) = 1,
P_l^l(x) = (−1)^l (2l − 1)!! (1 − x²)^{l/2},
P_{l+1}^l(x) = x (2l + 1) P_l^l(x),
(l − m + 1) P_{l+1}^m(x) = (2l + 1) x P_l^m(x) − (l + m) P_{l−1}^m(x).

Using the associated Legendre polynomials P_l^m we can define the (real) spherical harmonics basis as

Y_l^m(θ, φ) = √2 K_l^m cos(mφ) P_l^m(cos θ)  for m > 0,
Y_l^m(θ, φ) = √2 K_l^m sin(−mφ) P_l^{−m}(cos θ)  for m < 0,
Y_l^0(θ, φ) = K_l^0 P_l^0(cos θ),

where

K_l^m = √( (2l + 1)(l − |m|)! / (4π (l + |m|)!) ).

We will use the fact that the associated Legendre polynomials correspond to the spherical harmonics bases that are rotationally symmetric along the z axis (m = 0). In order to incorporate spherical harmonics into the reduced rendering equation, we change the integral domain from the upper hemisphere Ω(n) back to S² via a max operation:

R(p, n) = ρ(p) ∫_{Ω(n)} u(ω) (ω · n) dω = ρ(p) ∫_{S²} u(ω) max(ω · n, 0) dω.

We see that the integral is comprised of two components: a lighting component u(ω) and a component that depends on the normal, max(ω · n, 0). The strategy is to pre-compute the two components by projecting onto spherical harmonics, and to evaluate the integral via a dot product at runtime, as we will now derive.
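The dot-product evaluation can be illustrated concretely for the first two bands. The sketch below uses the standard numerical constants of the real spherical harmonics and the well-known clamped-cosine coefficients (π, 2π/3, π/4 for bands 0-2) from Ramamoorthi and Hanrahan's irradiance analysis; it follows this document's convention f_r = ρ (no 1/π factor), and is a minimal illustration rather than renderer code.

```python
import numpy as np

def sh_basis_order2(n):
    """First 9 real spherical harmonics evaluated at a unit vector n."""
    x, y, z = n
    return np.array([
        0.282095,                       # Y_0^0
        0.488603 * y,                   # Y_1^-1
        0.488603 * z,                   # Y_1^0
        0.488603 * x,                   # Y_1^1
        1.092548 * x * y,               # Y_2^-2
        1.092548 * y * z,               # Y_2^-1
        0.315392 * (3 * z * z - 1.0),   # Y_2^0
        1.092548 * x * z,               # Y_2^1
        0.546274 * (x * x - y * y),     # Y_2^2
    ])

def diffuse_shade(albedo, n, U):
    """R(p, n) = rho * sum_lm A_l U_lm Y_l^m(n): a dot product in SH space.

    U holds 9 lighting coefficients; A_l are the SH coefficients of the
    clamped cosine max(w . n, 0).
    """
    A = np.array([3.141593,                       # band 0: pi
                  2.094395, 2.094395, 2.094395,   # band 1: 2*pi/3
                  0.785398, 0.785398, 0.785398, 0.785398, 0.785398])  # band 2: pi/4
    return albedo * np.dot(A * U, sh_basis_order2(n))
```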
Approximating the lighting component u(ω) in the reduced rendering equation using spherical harmonics Y_l^m up to band n can be written as

u(ω) ≈ Σ_{l=0}^{n} Σ_{m=−l}^{l} U_{l,m} Y_l^m(ω),

where the U_{l,m} ∈ R are coefficients. By using the orthogonality of spherical harmonics we can evaluate these coefficients as an integral between u(ω) and Y_l^m:

U_{l,m} = ∫_{S²} u(ω) Y_l^m(ω) dω.

Similarly, we let G_{l,m}(n) = ∫_{S²} max(ω · n, 0) Y_l^m(ω) dω denote the coefficients of the clamped cosine, so that the fragment radiance becomes R(p, n) = ρ(p) Σ_{l,m} U_{l,m} G_{l,m}(n). So far we have only considered the shading of a specific point p with surface normal n. If we consider the rendered image I given a shape V, lighting U, and camera parameters η, the image I is the evaluation of the rendering equation R at each point of V visible through each pixel in the image. This pixel-to-point mapping is determined by η. Therefore, we can write I as

I(U, V) = ρ(V, η) ⊙ F(N(V), U),  with F(n, U) = Σ_{l,m} U_{l,m} G_{l,m}(n),

where N(V) is the surface normal. We exploit the notation and use ρ(V, η) to represent the texture of V mapped to the image space through η. For our applications we must differentiate this equation with respect to lighting and material parameters. The derivative with respect to the lighting coefficients U can be obtained by

∂I/∂U_{l,m} = ρ(V, η) ⊙ ∂F/∂U_{l,m}.

This is the Jacobian matrix that maps from spherical harmonics coefficients to pixels. The term ∂F/∂U_{l,m} can then be computed as

∂F/∂U_{l,m} = G_{l,m}(N(V)).

The derivative with respect to texture is defined by

∂I/∂ρ = F(N(V), U).

Note that we assume texture variations are piece-wise constant with respect to our triangle mesh discretization. To model possible outdoor daylight conditions, we use the analytical Preetham skylight model. This model is calibrated by atmospheric data and parameterized by two intuitive quantities: the turbidity τ, which describes the cloudiness of the atmosphere, and two polar angles θ_s ∈ [0, π/2], φ_s ∈ [0, 2π], which encode the direction of the sun. Note that θ_s, φ_s are not the polar angles θ, φ for representing the incoming light direction ω in u(ω). The spherical harmonics representation of the Preetham skylight is presented in (BID16) as

U_{l,m}(θ_s, φ_s, τ) = Ũ_{l,|m|}(θ_s, τ) · Φ_m(φ_s),  with Φ_m(φ_s) = cos(mφ_s) for m ≥ 0 and Φ_m(φ_s) = sin(|m|φ_s) for m < 0.

This is derived by first performing a non-linear least squares fit to write U_{l,m} as a polynomial of θ_s and τ, which lets them solve for Ũ_{l,m}(θ_s, τ) with the sun placed at φ_s = 0:

Ũ_{l,m}(θ_s, τ) = Σ_i Σ_j (p_{l,m})_{i,j} θ_s^i τ^j,

where the (p_{l,m})_{i,j} are scalar coefficients; then U_{l,m}(θ_s, φ_s, τ) can be computed by applying a spherical harmonics rotation about z with φ_s, as expressed above. We refer the reader to (BID16) for more detail. For the purposes of this article we just need the above form to compute the derivatives. The derivatives of the lighting with respect to the skylight parameters (θ_s, φ_s, τ) are

∂U_{l,m}/∂θ_s = Φ_m(φ_s) Σ_i Σ_j (p_{l,|m|})_{i,j} i θ_s^{i−1} τ^j,
∂U_{l,m}/∂φ_s = Ũ_{l,|m|}(θ_s, τ) Φ'_m(φ_s),
∂U_{l,m}/∂τ = Φ_m(φ_s) Σ_i Σ_j (p_{l,|m|})_{i,j} j θ_s^i τ^{j−1}.

For the geometry derivative ∂I/∂V, we assume the texture variations are piece-wise constant with respect to our triangle mesh discretization and omit the term ∂ρ/∂V as its magnitude is zero. Computing ∂N/∂V is provided in Section 3.2. Since the clamped cosine is rotationally symmetric about n, we have G_{l,m}(n) = A_l Y_l^m(n) for scalars A_l, so computing ∂F/∂N_i on face i gives

∂F/∂N_i = Σ_{l=0}^{n} Σ_{m=−l}^{l} A_l U_{l,m} ∂Y_l^m(N_i)/∂N_i,

where ∂Y_l^m/∂N_i is the derivative of the spherical harmonics with respect to the face normal N_i. To begin this derivation, recall the relationship between a unit normal vector n = (n_x, n_y, n_z) and its corresponding polar angles θ, φ:

θ = cos⁻¹(n_z),  φ = tan⁻¹(n_y / n_x).

As in our other experiments, we use real-world lighting data as the initial lighting condition. Our step size for computing adversaries is 0.05 along the direction of the lighting gradients. We run our adversarial lighting iterations until we fool the network or reach the maximum of 30 iterations, to avoid too-extreme lighting conditions, such as turning the lights off.
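The projection integrals U_{l,m} can be estimated numerically. The sketch below, reusing sh_basis_order2 from the sketch above and thus truncated to the first 9 coefficients (the experiments use 7 bands, i.e., 49 per channel), is a simple Monte Carlo illustration with uniform sphere sampling, not our renderer's pre-computation code.

```python
import numpy as np

def project_env_to_sh(u, n_samples=200000, seed=0):
    """Monte Carlo estimate of U_{l,m} = integral of u(w) Y_l^m(w) over S^2.

    With uniform sphere samples, each integral is approximately
    (4*pi / N) * sum_k u(w_k) Y_l^m(w_k). `u` maps a direction to radiance.
    """
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_samples, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # uniform on the sphere
    acc = np.zeros(9)
    for w in dirs:
        acc += u(w) * sh_basis_order2(w)
    return 4.0 * np.pi * acc / n_samples

# Example: a simple "sky" that is brighter toward +z.
U = project_env_to_sh(lambda w: max(w[2], 0.0))
```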
Our random lighting examples are constructed at each epoch by randomly perturbing the lighting coefficients with offsets ranging from −0.5 to 0.5. When training the 16-layer WideResNet with wide factor 4, we use batch size 128, learning rate 0.125, dropout rate 0.3, and the standard cross-entropy loss. We implement the training using PYTORCH, with the SGD optimizer, Nesterov momentum 0.9, and weight decay 5e-4. We train the model for 150 epochs and use the one with the best accuracy on the validation set. Figure 13 shows examples of our adversarial lights at different training stages. In the early stages, the model is not robust to different lighting conditions, and thus small lighting perturbations are sufficient to fool the model. In the late stages, the network becomes more robust to different lightings; dramatic changes are then required to fool the model, and the attack may even fail to fool it within 30 iterations.

G EVALUATE RENDERING QUALITY

We evaluated our rendering quality by testing whether our rendered images are recognizable by models trained on real photographs. Although large 3D shape datasets, such as ShapeNet (BID4), are available, they do not have geometries or textures at the resolutions necessary to create realistic renderings. We collected 75 high-quality textured 3D shapes from cgtrader.com and turbosquid.com to evaluate our rendering quality. We augmented the shapes by changing the field of view, the background images, and the viewing directions, and then kept the configurations that were correctly classified by a ResNet-101 pre-trained on ImageNet. Specifically, we place the centroid, calculated as the weighted average of the mesh vertices where the weights are the vertex areas, at the origin, and normalize the shapes to the range −1 to 1; the field of view is chosen to be 2 and 3 in the same units as the normalized shape; background images include plain colors and real photos, which have a small influence on model predictions; viewing directions are chosen at a 60 degree zenith angle, with 16 views uniformly sampled from 0 to 2π in azimuthal angle. In Figure 15, we show the histogram of model confidence on the correct labels over 10,000 correctly classified images rendered by our differentiable renderer. The confidence is computed using the softmax function, and the results show that our rendering quality is faithful enough to be recognized by models trained on natural images.
Enabled by a novel differentiable renderer, we propose a new metric that has real-world implications for evaluating adversarial machine learning algorithms, resolving the lack of realism of the existing metric based on pixel norms.
Alternatives to recurrent neural networks, in particular, architectures based on attention or convolutions, have been gaining momentum for processing input sequences. In spite of their relevance, the computational properties of these alternatives have not yet been fully explored. We study the computational power of two of the most paradigmatic architectures exemplifying these mechanisms: the Transformer and the Neural GPU. We show both models to be Turing complete exclusively based on their capacity to compute and access internal dense representations of the data. In particular, neither the Transformer nor the Neural GPU requires access to an external memory to become Turing complete. Our study also reveals some minimal sets of elements needed to obtain these completeness results. There is an increasing interest in designing neural network architectures capable of learning algorithms from examples (BID6; BID7; BID10; BID11; BID13; BID4). A key requirement for any such architecture is thus to have the capacity of implementing arbitrary algorithms, that is, to be Turing complete. Turing completeness often follows for these networks as they can be seen as a control unit with access to an unbounded memory; as such, they are capable of simulating any Turing machine. On the other hand, the work by Siegelmann & Sontag has established a different way of looking at the Turing completeness of neural networks. In particular, their work establishes that recurrent neural networks (RNNs) are Turing complete even if only a bounded number of resources (i.e., neurons and weights) is allowed. This is based on two conditions: the ability of RNNs to compute internal dense representations of the data, and the mechanisms they use for accessing such representations. Hence, the view proposed by Siegelmann & Sontag shows that it is possible to release the full computational power of RNNs without arbitrarily increasing their model complexity. Most of the early neural architectures proposed for learning algorithms correspond to extensions of RNNs, e.g., Neural Turing Machines (BID6), and hence they are Turing complete in the sense of Siegelmann & Sontag. However, a recent trend has shown the benefits of designing networks that manipulate sequences but do not directly apply a recurrence to sequentially process their input symbols. Architectures based on attention or convolutions are two prominent examples of this approach. In this work we look at the problem of Turing completeness à la Siegelmann & Sontag for two of the most paradigmatic models exemplifying these features: the Transformer and the Neural GPU (BID11). The main contribution of our paper is to show that the Transformer and the Neural GPU are Turing complete based on their capacity to compute and access internal dense representations of the data. In particular, neither the Transformer nor the Neural GPU requires access to an external additional memory to become Turing complete. Thus the completeness holds for bounded architectures (bounded number of neurons and parameters). To prove this we assume that internal activations are represented as rational numbers with arbitrary precision. For the case of the Transformer we provide a direct simulation of a Turing machine, while for the case of the Neural GPU our result follows by simulating standard sequence-to-sequence RNNs. Our study also reveals some minimal sets of elements needed to obtain these completeness results.
The computational power of Transformers and of Neural GPUs has been compared in the current literature (BID4), but only in an informal way. Our paper provides a formal framework for approaching this comparison. For the sake of space, we only include sketches of some proofs in the body of the paper. The details of every proof can be found in the appendix. Background work The study of the computational power of neural networks can be traced back to BID14, which established an analogy between neurons with hard-threshold activations and first-order logic sentences, and BID12, which drew a connection between neural networks and finite automata. As mentioned earlier, the first work showing the Turing completeness of finite neural networks with linear connections was carried out by Siegelmann & Sontag (1995). Since being Turing complete does not ensure the ability to actually learn algorithms in practice, there has been an increasing interest in enhancing RNNs with mechanisms for supporting this task. One strategy has been the addition of inductive biases in the form of external memory, with the Neural Turing Machine (NTM) (BID6) being a paradigmatic example. To ensure that NTMs are differentiable, their memory is accessed via a soft attention mechanism. Other examples of architectures that extend RNNs with memory are the Stack-RNN (BID10) and the (De)Queue-RNNs (BID7). By Siegelmann & Sontag's results, all these architectures are Turing complete. The Transformer architecture is almost exclusively based on the attention mechanism, and it has achieved state-of-the-art results on many language-processing tasks. While not initially designed to learn general algorithms, BID4 have advocated the need for enriching its architecture with several new features as a way to learn general procedures in practice. This enrichment is motivated by the empirical observation that the original Transformer architecture struggles to generalize to inputs of lengths not seen during training. We, in contrast, show that the original Transformer architecture is Turing complete, based on different considerations. These results do not contradict each other, but show the differences that may arise between theory and practice. For instance, BID4 assume fixed precision, while we allow arbitrary internal precision during computation. We think that both approaches can be complementary, as our theoretical results can shed light on the intricacies of the original architecture, which aspects of it are candidates for change or improvement, and which others are strictly needed. For instance, our proof uses hard attention while the Transformer is often trained with soft attention. See Section 3.3 for a discussion of these differences. The Neural GPU is an architecture that mixes convolutions and gated recurrences over tridimensional tensors. It has been shown that Neural GPUs are powerful enough to learn decimal multiplication from examples (BID5), being the first neural architecture capable of solving this problem end-to-end. The similarity of Neural GPUs and cellular automata has been used as an argument to state the Turing completeness of the architecture (BID11; BID16). Cellular automata are Turing complete (BID15), and their completeness is established assuming an unbounded number of cells. In the Neural GPU architecture, in contrast, the number of cells that can be used during a computation is proportional to the size of the input sequence (BID11).
One can cope with the need for more cells by padding the Neural GPU input with additional (dummy) symbols, as much as needed for a particular computation. Nevertheless, this is only a partial solution since, for a Turing-complete model of computation, one cannot decide a priori how much memory is needed to solve a particular problem. Our results in this paper are somewhat orthogonal to the previous argument; we show that one can leverage the dense representations of the Neural GPU cells to obtain Turing completeness without requiring the addition of cells beyond the ones used to store the input. We assume all weights and activations to be rational numbers of arbitrary precision. Moreover, we only allow the use of rational functions with rational coefficients. Most of our positive results make use of the piecewise-linear sigmoidal activation function σ: Q → Q, which is defined as σ(x) = 0 if x < 0, σ(x) = x if 0 ≤ x ≤ 1, and σ(x) = 1 if x > 1. We are mostly interested in sequence-to-sequence (seq-to-seq) neural network architectures, which we next formalize. A seq-to-seq network N receives as input a sequence X = (x_1, ..., x_n) of vectors x_i ∈ Q^d, for some d > 0, and produces as output a sequence Y = (y_1, ..., y_m) of vectors y_i ∈ Q^d. Most of these types of architectures require a seed vector s and some stopping criterion for determining the length of the output. The latter is usually based on the generation of a particular output vector called an end-of-sequence mark. In our formalization instead, we allow a network to produce a fixed number r ≥ 0 of output vectors. Thus, for convenience, we see a general seq-to-seq network as a function N such that the value N(X, s, r) corresponds to an output sequence of the form Y = (y_1, y_2, ..., y_r). With this definition, we can view every seq-to-seq network as a language recognizer of strings as follows. Definition 2.1. A seq-to-seq language recognizer is a tuple A = (Σ, f, N, s, F), where Σ is a finite alphabet, f: Σ → Q^d is an embedding function, N is a seq-to-seq network, s ∈ Q^d is a seed vector, and F ⊆ Q^d is a set of final vectors. We say that A accepts the string w ∈ Σ*, if there exists an integer r ∈ N such that N(f(w), s, r) = (y_1, ..., y_r) and y_r ∈ F. The language accepted by A, denoted by L(A), is the set of all strings accepted by A. We impose two additional restrictions over recognizers. The embedding function f: Σ → Q^d should be computable by a Turing machine in time linear w.r.t. the size of Σ. This covers the two most typical ways of computing input embeddings from symbols: the one-hot encoding, and embeddings computed by fixed feed-forward networks. Moreover, the set F should also be recognizable in linear time: given a vector f, the membership f ∈ F should be decidable by a Turing machine working in linear time with respect to the size (in bits) of f. We impose these restrictions to disallow the possibility of cheating by encoding arbitrary computations in the input embedding or the stop condition, while being permissive enough to construct meaningful embeddings and stopping criteria. Finally, a class N of seq-to-seq neural network architectures defines the class L_N composed of all the languages accepted by language recognizers that use networks in N. From these notions, the formalization of Turing completeness of a class N naturally follows: the class N is Turing complete if L_N is exactly the class of languages recognized by Turing machines.
Given an input sequence X = (x_1, ..., x_n), a seed vector y_0, and r ∈ N, an encoder-decoder RNN is given by the following two recursions:

h_i = f_1(x_i W + h_{i−1} V + b_1)  (encoder),
g_t = f_2(g_{t−1} U + y_{t−1} R + b_2),  y_t = O(g_t)  (decoder, with g_0 = h_n),

where V, W, U, R are matrices, b_1 and b_2 are vectors, O(·) is an output function, and f_1 and f_2 are activation functions. The first recursion is called the RNN-encoder and the second the RNN-decoder. The next theorem follows by inspection of the proof by Siegelmann & Sontag (1995) after adapting it to our formalization of encoder-decoder RNNs. Theorem 2.3 (Siegelmann & Sontag (1995)). The class of encoder-decoder RNNs is Turing complete. Turing completeness holds even if we restrict to the class in which R is the zero matrix, b_1 and b_2 are the zero vector, O(·) is the identity function, and f_1 and f_2 are the piecewise-linear sigmoidal activation σ. In this section we present a formalization of the Transformer architecture, abstracting away from specific choices of functions and parameters. Our formalization is not meant to produce an efficient implementation of the Transformer, but to provide a simple setting over which its mathematical properties can be established in a formal way. The Transformer is heavily based on the attention mechanism, introduced next. Consider a scoring function score: Q^d × Q^d → Q and a normalization function ρ: Q^n → Q^n. Given a query q ∈ Q^d, keys K = (k_1, ..., k_n), and values V = (v_1, ..., v_n), with k_i, v_i ∈ Q^d, the attention is defined as

Att(q, K, V) = α_1 v_1 + α_2 v_2 + ··· + α_n v_n,  with (α_1, ..., α_n) = ρ(score(q, k_1), ..., score(q, k_n)).

Usually, q is called the query, K the keys, and V the values. We do not pose any restriction on the scoring and normalization functions, as some of our results hold in general. We only require the normalization function to satisfy that there is a function f_ρ from Q to Q^+ such that for each x = (x_1, ..., x_n) ∈ Q^n, the i-th component of ρ(x) equals f_ρ(x_i) / Σ_{j=1}^{n} f_ρ(x_j). Thus, Att(q, K, V) above is a convex combination of the vectors in V. When proving possibility results, we will need to pick specific scoring and normalization functions. A usual choice for the scoring function is a feed-forward network with input (q, k_i), sometimes called additive attention. Another possibility is to use the dot product ⟨q, k_i⟩, called multiplicative attention. We use a combination of both: multiplicative attention plus a non-linear function. For the normalization function, softmax is a standard choice. Nevertheless, in our proofs we use the hardmax function, which is obtained by setting f_hardmax(x_i) = 1 if x_i is the maximum value, and f_hardmax(x_i) = 0 otherwise. Thus, for a vector x in which the maximum value occurs r times, we have that hardmax_i(x) = 1/r if x_i is the maximum value of x, and hardmax_i(x) = 0 otherwise. We call it hard attention whenever hardmax is used as the normalization function. As customary, for a function F: Q^d → Q^d and a sequence X = (x_1, ..., x_n), we write F(X) = (F(x_1), ..., F(x_n)). Transformer Encoder and Decoder A single-layer encoder of the Transformer is a parametric function Enc(X; θ) receiving a sequence X = (x_1, ..., x_n) of vectors in Q^d and returning a sequence Z = (z_1, ..., z_n) of the same length of vectors in Q^d. In general, we consider the parameters in Enc(X; θ) as functions Q(·), K(·), V(·), and O(·), all of them from Q^d to Q^d. The single-layer encoder is then defined as follows:

a_i = Att(Q(x_i), K(X), V(X)) + x_i,
z_i = O(a_i) + a_i.

In practice Q(·), K(·), V(·) are typically matrix multiplications, and O(·) a feed-forward network. The +x_i and +a_i summands are usually called residual connections (BID8). When the particular functions used as parameters are not important, we simply write Z = Enc(X). The Transformer encoder is defined simply as the repeated application of single-layer encoders (with independent parameters), plus two final transformation functions K(·) and V(·) applied to every vector in the output sequence of the final layer.
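Since hardmax is central to our proofs, a tiny sketch may help fix intuitions. The following snippet, an illustration rather than part of the formal construction, implements Att with multiplicative scoring and hardmax normalization; note how a uniquely maximized score copies a single value vector exactly.

```python
import numpy as np

def hardmax(scores):
    """Uniform weight over the positions attaining the maximum score."""
    alpha = (scores == scores.max()).astype(float)
    return alpha / alpha.sum()

def att(q, K, V, score=lambda q, k: np.dot(q, k)):
    """Att(q, K, V): a convex combination of the value vectors, with
    weights obtained by normalizing the scores (multiplicative scoring)."""
    alpha = hardmax(np.array([score(q, k) for k in K]))
    return sum(a * v for a, v in zip(alpha, V))

# With hard attention, a query matching a single key copies its value exactly:
K = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
V = [np.array([5.0, 5.0]), np.array([7.0, 7.0])]
print(att(np.array([0.0, 1.0]), K, V))  # -> [7. 7.]
```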
Thus the L-layer Transformer encoder is defined by the following recursion (with 1 ≤ ℓ ≤ L − 1 and X^1 = X):

X^{ℓ+1} = Enc(X^ℓ; θ_ℓ),  (K^e, V^e) = (K(X^L), V(X^L)).

We use (K^e, V^e) = TEnc_L(X) to denote an L-layer Transformer encoder over the sequence X. A single-layer decoder is similar to a single-layer encoder but with additional attention to an external pair of key-value vectors (K^e, V^e). The input for the single-layer decoder is a sequence Y = (y_1, ..., y_k) plus the external pair (K^e, V^e), and the output is a sequence Z = (z_1, ..., z_k). When defining a decoder layer we denote by Y_j the sequence (y_1, ..., y_j), for 1 ≤ j ≤ k. The layer is also parameterized by four functions Q(·), K(·), V(·) and O(·), and is defined as follows:

p_i = Att(Q(y_i), K(Y_i), V(Y_i)) + y_i,
a_i = Att(p_i, K^e, V^e) + p_i,
z_i = O(a_i) + a_i.

Notice that the first (self) attention considers the subsequence of Y only until index i, and is used to generate a query p_i to attend to the external pair (K^e, V^e). We denote the single-decoder layer by Dec((K^e, V^e), Y; θ). The Transformer decoder is a repeated application of single-layer decoders, plus a transformation function F: Q^d → Q^d applied to the final vector of the decoded sequence. Thus, the output of the decoder is a single vector z ∈ Q^d. Formally, the L-layer Transformer decoder is defined as

Y^{ℓ+1} = Dec((K^e, V^e), Y^ℓ; θ_ℓ)  (with 1 ≤ ℓ ≤ L − 1 and Y^1 = Y),  z = F(y_k^L),

where y_k^L is the final vector of Y^L. We use z = TDec_L((K^e, V^e), Y) to denote an L-layer Transformer decoder. The complete Transformer A Transformer network receives an input sequence X, a seed vector y_0, and a value r ∈ N. Its output is a sequence Y = (y_1, ..., y_r) defined as y_{t+1} = TDec(TEnc(X), (y_0, y_1, ..., y_t)), for 0 ≤ t ≤ r − 1. We denote the output sequence of the Transformer as Y = (y_1, y_2, ..., y_r) = Trans(X, y_0, r). The Transformer, as defined above, is order-invariant: two input sequences that are permutations of each other produce exactly the same output. This is a consequence of the following property of the attention function: if K = (k_1, ..., k_n), V = (v_1, ..., v_n), and π: {1, ..., n} → {1, ..., n} is a permutation, then Att(q, K, V) = Att(q, π(K), π(V)) for every query q. This weakness has motivated the need for including information about the order of the input sequence by other means; in particular, this is often achieved by using the so-called positional encodings, which we study below. But before going into positional encodings, a natural question is what languages the Transformer can recognize without them. As a standard yardstick we use the well-studied class of regular languages, i.e., languages recognized by finite automata. Order-invariance implies that not every regular language can be recognized by a Transformer network. As an example, there is no Transformer network that can recognize the regular language (ab)*, as the latter is not order-invariant. A reasonable question then is whether the Transformer can express all regular languages which are order-invariant. It is possible to show that this is not the case by proving that the Transformer actually satisfies a stronger invariance property, which we call proportion invariance. For a string w ∈ Σ* and a symbol a ∈ Σ, we use prop(a, w) to denote the ratio between the number of times that a appears in w and the length of w. Consider now the set PropInv(w) = {u ∈ Σ* | prop(a, w) = prop(a, u) for every a ∈ Σ}. Proposition 3.1. Let Trans be a Transformer, s a seed, r ∈ N, and f: Σ → Q^d an embedding function. Then Trans(f(w), s, r) = Trans(f(u), s, r) for each u, w ∈ Σ* with u ∈ PropInv(w). As an immediate corollary we obtain the following. Corollary 3.2.
Consider the order-invariant regular language L = {w ∈ {a, b}* | w has an even number of a symbols}. Then L cannot be recognized by a Transformer network. On the other hand, languages recognized by Transformer networks are not necessarily regular. Proposition 3.3. There is a Transformer network that recognizes the non-regular language S = {w ∈ {a, b}* | w has strictly more symbols a than symbols b}. That is, the computational power of Transformer networks without positional encodings is both rather weak (they do not even contain the order-invariant regular languages) and not so easy to capture (as they can express counting properties that go beyond regularity). As we show in the next section, the inclusion of positional encodings radically changes the picture. A positional encoding is a function pos: N → Q^d; combined with an embedding function f, it gives rise to the embedding f_pos(a, i) = f(a) + pos(i). Thus, given an input string w = a_1 a_2 ··· a_n ∈ Σ*, the result of the embedding function f_pos(w) provides a "new" input f_pos(a_1, 1), f_pos(a_2, 2), ..., f_pos(a_n, n) to the Transformer encoder. Similarly, the Transformer decoder, instead of receiving the sequence Y = (y_0, y_1, ..., y_t) as input, now receives the sequence

Y' = (y_0 + pos(1), y_1 + pos(2), ..., y_t + pos(t + 1)).

As for the case of the embedding functions, we require the positional encoding pos(i) to be computable by a Turing machine working in linear time w.r.t. the size (in bits) of i. The main result of this section is the completeness of Transformers with positional encodings. Theorem 3.4. The class of Transformer networks with positional encodings is Turing complete.

Figure: High-level diagram of the decoder. At each step, the decoder attends to the encoder and copies the corresponding symbol, and uses self-attention to compute the next state and the next symbol under M's head.

Proof Sketch. We show that for every Turing machine M = (Q, Σ, δ, q_init, F) there exists a Transformer that simulates the complete execution of M. We represent a string w = s_1 s_2 ··· s_n ∈ Σ* as a sequence X of one-hot vectors with their corresponding positional encodings. Denote by q^(t) ∈ Q the state of M at time t when processing w, and by s^(t) ∈ Σ the symbol under M's head at time t. Similarly, v^(t) ∈ Σ is the symbol written by M and m^(t) ∈ {←, →} the head direction. We next describe how to construct a Transformer Trans_M that on input X produces a sequence y_0, y_1, y_2, ... such that y_i contains information about q^(i) and s^(i) (encoded as one-hot vectors). The construction and proof go by induction. Assume the decoder receives y_0, ..., y_t such that y_i contains q^(i) and s^(i). To construct y_{t+1}, in the first layer we just implement M's transition function δ; note that δ(q^(t), s^(t)) = (q^(t+1), v^(t), m^(t)). In the second layer we use the positional encodings to compute c^(t+1), which is the index of the cell to which M is going to be pointing in the next time step. By using the residual connections we also store q^(t+1) and v^(t) in the second-layer outputs z_i^2. The final piece of our construction is to compute the symbol that the tape holds at index c^(t+1), that is, the symbol under M's head at time t + 1. For this we use the following observation: the symbol at index c^(t+1) at time t + 1 coincides with the last symbol written by M at index c^(t+1). Thus, we need to find the maximum value i ≤ t such that c^(i) = c^(t+1) and then copy v^(i), which is the symbol that was written by M at time step i. This last computation can also be done with a self-attention layer. Thus, we attend directly to position i (hard attention plus positional encodings) and copy v^(i), which is exactly s^(t+1). We finally copy q^(t+1) and s^(t+1) into the output to construct y_{t+1}. The figure above shows a high-level diagram of the decoder computation.
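The crucial copy step of the proof (attending to the unique past position i with c^(i) = c^(t+1)) can be illustrated concretely. The sketch below is an informal illustration with scalar positional codes and a score that peaks exactly at the matching index; the actual construction realizes such a score with multiplicative attention plus a non-linear function.

```python
import numpy as np

def attend_to_position(target_pos, positions, values):
    """With hard attention and positional encodings, a query that encodes
    a target index retrieves exactly the value stored at that index."""
    # the score is uniquely maximized when the key position equals the query
    scores = np.array([-(target_pos - p) ** 2 for p in positions])
    alpha = (scores == scores.max()).astype(float)
    alpha /= alpha.sum()                       # hardmax
    return sum(a * v for a, v in zip(alpha, values))

# Recovering the symbol v^(i) written at a tape cell when the head returns:
positions = [1, 2, 3, 4]                       # time steps i
values = [np.array([1, 0]), np.array([0, 1]),
          np.array([1, 0]), np.array([0, 1])]  # one-hot encodings of v^(i)
print(attend_to_position(3, positions, values))  # copies v^(3) -> [1 0]
```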
There are several other details in the construction. In particular, at the beginning of the computation (the first n steps), the decoder needs to attend to the encoder and copy the input symbols so they can later be processed as described above. Another detail is when M reaches a cell that has not been visited before; then the symbol under the head has to be set to # (the blank symbol). We show that all these decisions can be implemented with feed-forward networks plus attention. The complete construction uses one encoder layer, three decoder layers, and vectors of dimension d = 2|Q| + 4|Σ| + 11 to store one-hot representations of states and symbols plus some additional working space. All details can be found in the appendix. Although the general architecture that we presented closely follows that of the original Transformer, some choices for functions and parameters in our positive results are different from the usual choices in practice. For instance, we use hard attention, which allows us to attend directly to specific positions. In contrast, the original architecture uses softmax to attend, plus sin-cos functions as positional encodings. The softmax, sin and cos are not rational functions, and thus are forbidden in our formalization. An interesting line for future work is to consider arbitrary functions but with additional restrictions, such as the finite precision considered in recent work. Another difference lies in the particular choice for the function O(·) in the encoder and decoder layers. The need for arbitrary precision Our Turing-completeness proof relies on having arbitrary precision for internal representations, in particular, for storing and manipulating positional encodings. Although having arbitrary precision is a standard assumption when studying the expressive power of neural networks (BID3), practical implementations rely on fixed-precision hardware. If fixed precision is used, then positional encodings can be seen as functions of the form pos: N → A, where A is a finite subset of Q^d. Thus, the embedding function f_pos can be seen as a regular embedding function f': Σ' → Q^d where Σ' = Σ × A. Thus, whenever fixed precision is used, the net effect of having positional encodings is just to increase the size of the input alphabet. Then from Proposition 3.1 we obtain that the Transformer with positional encodings and fixed precision is not Turing complete. Although no longer Turing complete, one can still study the computational power of fixed-precision Transformers. We leave this as future work. The Neural GPU (BID11) is an architecture that mixes convolutions and gated recurrences over tridimensional tensors. It is parameterized by three functions: U(·) (update function), R(·) (reset function), and F(·). Given a tensor S ∈ Q^{h×w×d} and a value r ∈ N, it produces a sequence S^1, S^2, ..., S^r given by the following recursive definition (with S^0 = S):

U^t = U(S^{t−1}),  R^t = R(S^{t−1}),
S^t = U^t ⊙ S^{t−1} + (1 − U^t) ⊙ F(R^t ⊙ S^{t−1}),

where ⊙ denotes the element-wise product, and 1 is a tensor with only 1's. Neural GPUs force the functions U(·) and R(·) to produce a tensor of the same shape as their input with all values in [0, 1]. Thus, a Neural GPU resembles a gated recurrent unit, with U working as the update gate and R as the reset gate.
The functions U(·), R(·), and F(·) are defined as a convolution of the input with a 4-dimensional kernel bank of shape (k_H, k_W, d, d) plus a bias tensor, followed by a point-wise transformation:

f(S) = β(K * S + B),

with different kernels and biases for U(·), R(·), and F(·), and with β a point-wise function. To have an intuition of how the convolution K * S works, it is illustrative to think of S as an (h × w)-grid of (row) vectors S_ij = S_{i,j,:} ∈ Q^d, and of K as a (k_H × k_W)-grid of matrices K_ij = K_{i,j,:,:} ∈ Q^{d×d}; then K * S is a regular two-dimensional convolution in which scalar multiplication has been replaced by vector-matrix multiplication, as in the following expression:

(K * S)_{i,j} = Σ_{δ_1=1}^{k_H} Σ_{δ_2=1}^{k_W} S_{i+δ_1−⌈k_H/2⌉, j+δ_2−⌈k_W/2⌉} K_{δ_1,δ_2},

where S_{i',j'} is taken to be the zero vector whenever (i', j') falls outside the grid. This intuition makes evident the similarity between Neural GPUs and cellular automata: S is a grid of cells, and in every iteration each cell is updated considering the values of its neighbors according to a fixed rule given by K (BID11). As customary, we assume zero-padding when convolving outside S. To study the computational power of Neural GPUs, we cast them as a standard seq-to-seq architecture. Given an input sequence, we put every vector in the first column of the tensor S. We also need to pick a special cell of S as the output cell from which we read the output vector in every iteration. We pick the last cell of the first column of S. Formally, given a sequence X = (x_1, ..., x_n) with x_i ∈ Q^d, and a fixed value w ∈ N, we construct the tensor S ∈ Q^{n×w×d} by letting S_{i,1,:} = x_i and S_{i,j,:} = 0 for j > 1. The output of the Neural GPU, denoted by NGPU(X, r), is the sequence of vectors Y = (y_1, y_2, ..., y_r) such that y_t = S^t_{n,1,:}. Given this definition, we can naturally view Neural GPUs as language recognizers (as formalized in Section 2). Since the bias tensor B in the equation above is of the same size as S, the number of parameters in a Neural GPU grows with the size of the input. Thus, a Neural GPU cannot be considered as a fixed architecture. To tackle this issue we introduce the notion of a uniform Neural GPU, as one in which for every bias B there exists a matrix B' ∈ Q^{w×d} such that B_{i,:,:} = B' for each i. Thus, uniform Neural GPUs can be finitely specified (as they have a constant number of parameters, not depending on the length of the input). We now establish the Turing completeness of this model. Theorem 4.1. The class of uniform Neural GPUs is Turing complete. Proof sketch. The proof is based on simulating a seq-to-seq RNN; thus, completeness follows from Theorem 2.3. Consider an RNN encoder-decoder language recognizer, such that N is of dimension d and its encoder and decoder are defined by the equations h_i = σ(x_i W + h_{i−1} V) and g_t = σ(g_{t−1} U), respectively, where g_0 = h_n and n is the length of the input. We use a Neural GPU with input tensor S ∈ Q^{n×1×(3d+3)}. Let E_i = S_{i,1,1:d} and D_i = S_{i,1,d+1:2d}. The idea is to use E for the encoder and D for the decoder. We use kernel banks of shape (2, 1, 3d+3, 3d+3) with uniform bias tensors to simulate the following computation. In every step t, we first compute the value of σ(E_t W + E_{t−1} V) and store it in E_t, and then reset E_{t−1} to zero. Similarly, in step t we update the vector in position D_{t−1}, storing in it the value σ(D_{t−1} U + E_{t−1} U) (for the value of E_{t−1} before the reset). We use the gating mechanism to ensure a sequential update of the cells, such that at time t we update only positions E_i and D_j for i ≤ t and j ≤ t − 1. Thus the updates on D are always one iteration behind the updates on E.
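To make the gated recurrence and the vector-matrix convolution concrete, here is a minimal NumPy sketch of one Neural GPU step. It assumes, for simplicity, that the same piecewise-linear σ serves as the point-wise function of U(·), R(·), and F(·); the definition above allows other choices, and uniform biases of shape (w, d) broadcast over the first axis.

```python
import numpy as np

def sigma(x):  # piecewise-linear sigmoid used throughout the paper
    return np.clip(x, 0.0, 1.0)

def conv(K, S):
    """K * S: 2D convolution where scalars become d-vectors (cells of S)
    and d x d matrices (entries of the kernel bank K); zero-padding."""
    kH, kW, d, _ = K.shape
    h, w, _ = S.shape
    out = np.zeros_like(S)
    for i in range(h):
        for j in range(w):
            for d1 in range(kH):
                for d2 in range(kW):
                    ii, jj = i + d1 - kH // 2, j + d2 - kW // 2
                    if 0 <= ii < h and 0 <= jj < w:
                        out[i, j] += S[ii, jj] @ K[d1, d2]
    return out

def ngpu_step(S, KU, BU, KR, BR, KF, BF):
    """S^t = U (.) S^{t-1} + (1 - U) (.) F(R (.) S^{t-1})."""
    U = sigma(conv(KU, S) + BU)       # update gate, values in [0, 1]
    R = sigma(conv(KR, S) + BR)       # reset gate, values in [0, 1]
    F = sigma(conv(KF, R * S) + BF)   # candidate state
    return U * S + (1.0 - U) * F
```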
Since the vectors in D are never reset to zero, they keep being updated, which allows us to simulate an arbitrarily long computation. In particular, we prove that at iteration t it holds that E_t = h_t, and at iteration n + t it holds that D_n = g_t. We require 3d + 3 components, as we need to implement several gadgets for properly using the update and reset gates. In particular, we need to store the value of E_{t−1} before we reset it. The detailed construction and the correctness proof can be found in the appendix.

The proof above makes use of kernels of shape (2, 1, d, d) to obtain Turing completeness. This is, in a sense, optimal, as one can easily prove that Neural GPUs with kernels of shape (1, 1, d, d) are not Turing complete, regardless of the size of d. In fact, for kernels of this shape the value of a cell of S at time t depends only on the value of the same cell at time t − 1.

Zero padding vs. circular convolution. The proof of Theorem 4.1 requires the application of zero padding in the convolution. This allows us to clearly differentiate internal cells from cells corresponding to the endpoints of the input sequence. Interestingly, Turing completeness is lost if we replace zero padding with circular convolution. Formally, given S ∈ Q^{h×w×d}, a circular convolution is obtained by defining S_{h+n,:,:} = S_{n,:,:} for n ∈ Z. One can prove that uniform Neural GPUs with circular convolutions cannot differentiate among periodic sequences of different length; in particular, they cannot check whether a periodic input sequence is of even or odd length. This yields the following result:

Proposition 4.2. Uniform Neural GPUs with circular convolutions are not Turing complete.

Related to this last result is the empirical observation by BID16 that Neural GPUs that learn to solve hard problems, e.g., binary multiplication, and which generalize to most of the inputs, struggle with highly symmetric (and nearly periodic) inputs. Indeed, BID16 exhibit examples of the form 11111111 × 11111111 failing for all inputs with eight or more 1s. We leave as future work to explore the implications of our theoretical results for this practical observation.

BID5 simplified Neural GPUs and proved that, by considering piecewise-linear activations and bidimensional input tensors instead of the original smooth activations and tridimensional tensors used by BID11, it is possible to achieve substantially better results in terms of training time and generalization. Our Turing-completeness proof also relies on a bidimensional tensor and uses piecewise-linear activations, thus providing theoretical evidence that these simplifications actually retain the full expressiveness of Neural GPUs while simplifying their practical applicability.

We have presented an analysis of the Turing completeness of two popular neural architectures for sequence-processing tasks, namely the Transformer, based on attention, and the Neural GPU, based on recurrent convolutions. We plan to further refine this analysis in the future. For example, our proof of Turing completeness for the Transformer requires the presence of residual connections, i.e., the +x_i, +a_i, +y_i, and +p_i summands in the corresponding layer equations, while our proof for Neural GPUs heavily relies on the gating mechanism. We will study whether these features are actually essential to obtain completeness. We presented general abstract versions of both architectures in order to prove our theoretical results.
Although we closely follow the original definitions, some choices of functions and parameters in our positive results differ from the usual choices in practice, most notably the use of hard attention for the case of the Transformer, and the piecewise-linear activation functions for both architectures. As we have mentioned, BID5 showed that for Neural GPUs piecewise-linear activations actually help in practice, but for the case of the Transformer architecture more experimentation is needed to reach a conclusive answer. This is part of our future work.

Although our results are mostly of theoretical interest, they might lead to observations of practical interest. For example, BID1 have established the undecidability of several practical problems related to probabilistic language modeling with RNNs. This means that such problems can only be approached in practice via heuristic solutions. Many of the results in BID1 are, in fact, a consequence of the Turing completeness of RNNs as established by Siegelmann & Sontag. We plan to study to what extent our analogous undecidability results for Transformers and Neural GPUs imply undecidability for language-modeling problems based on these architectures.

Finally, our results rely on being able to compute internal representations of arbitrary precision. It would be interesting to perform a theoretical study of the main properties of both architectures in a setting in which only finite precision is allowed, as has recently been carried out for RNNs. We also plan to tackle this problem in our future work.

We first sketch the main idea of Siegelmann & Sontag's proof; we refer the reader to the original paper for details. Siegelmann & Sontag show how to simulate a two-stack machine M (and subsequently, a Turing machine) with a single RNN N with σ as activation. They first construct a network N_1 that, with 0 as initial state (h^{N_1}_0 = 0) and with a binary string w ∈ {0, 1}* as input sequence, produces a representation of w as a rational number and stores it as one of its internal values. Their internal representation encodes every string w as a rational number between 0 and 1. In particular, they use base 4 such that, for example, the string w = 100110 is encoded as (0.311331)_4; that is, each bit b is encoded as the base-4 digit 2b + 1, so the encoding of w = b_1 b_2 ⋯ b_n is

x_w = Σ_{i=1}^{n} (2 b_i + 1) / 4^i.

This representation allows one to easily simulate stack operations as affine transformations plus σ activations. For instance, if x_w is the value representing the string w = b_1 b_2 ⋯ b_n seen as a stack, then the top(w) operation can be defined simply as y = σ(4 x_w − 2), since y = 1 if and only if b_1 = 1, and y = 0 if and only if b_1 = 0. Other stack operations can be similarly simulated.

Using this representation, they construct a second network N_2 that simulates the two-stack machine by using one neuron value to simulate each stack. The input w for the simulated machine M is assumed to be an internal value given to N_2 as an initial state (h^{N_2}_0). Thus, N_2 expects only zeros as input; to make N_2 work for r steps, an input of the form 0^r should be provided. Finally, they combine N_1 and N_2 to construct a network N which expects an input of the form w0^r, the string w followed by enough zeros for the simulation to run. It is clear that Siegelmann & Sontag's proof resembles a modern encoder-decoder RNN architecture, where N_1 is the encoder and N_2 is the decoder; thus it is straightforward to use the same construction to provide an RNN encoder-decoder N and a language recognizer A that uses N and simulates the two-stack machine M.
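The base-4 stack encoding is easy to play with. The following sketch (function names are ours) uses the saturated-linear sigmoid σ to implement top, pop, and push exactly as described above:

```python
def sigma(x):
    """Saturated-linear sigmoid: 0 for x <= 0, x for 0 < x < 1, 1 for x >= 1."""
    return max(0.0, min(1.0, x))

def encode(bits):
    """Encode bits b_1..b_n as sum_i (2*b_i + 1) / 4^i, e.g. 100110 -> (0.311331)_4."""
    return sum((2 * b + 1) / 4 ** i for i, b in enumerate(bits, start=1))

def top(x):                        # 1 iff the first bit is 1
    return sigma(4 * x - 2)

def pop(x):                        # drop the leading base-4 digit
    return 4 * x - (2 * top(x) + 1)

def push(b, x):                    # prepend bit b as digit 2b + 1
    return x / 4 + (2 * b + 1) / 4

x = encode([1, 0, 0, 1, 1, 0])
assert top(x) == 1
assert abs(pop(x) - encode([0, 0, 1, 1, 0])) < 1e-12
assert abs(push(1, pop(x)) - x) < 1e-12
```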
There are some details that are important to notice. Assume that N is given by the encoder and decoder formulas above. First, since N_2 in the construction expects no input, we can safely assume that the input matrix R in the encoder equation is the null matrix. Moreover, since A defines its own embedding function, we can ensure that every vector that we provide for the encoder part of N has a 1 in a fixed component, and thus we do not need the bias b_1, since it can be simulated with one row of the matrix V. We can do a similar construction for the bias b_2. Finally, Siegelmann & Sontag show that their construction can be modified such that a particular neuron of N_2, say n*, is always 0 except for the first time an accepting state of M is reached, in which case n* = 1. Thus, one can consider O(·) to be the identity function and add to A the stopping criterion that just checks whether n* is 1.

We extend the definition of the function PropInv to sequences of vectors. Given a sequence X = (x_1, ..., x_n), we use vals(X) to denote the set of all vectors occurring in X. Similarly as for strings, we use prop(v, X) for the number of times that v occurs in X divided by the length of X. We are now ready to extend PropInv with the following definition:

PropInv(X) = { X′ | vals(X′) = vals(X) and prop(v, X′) = prop(v, X) for all v ∈ vals(X) }.

Notice that for every embedding function f: Σ → Q^d and string w ∈ Σ*, if u ∈ PropInv(w) then f(u) ∈ PropInv(f(w)). Thus, in order to prove that Trans(f(w), s, r) = Trans(f(u), s, r) for every u ∈ PropInv(w), it is enough to prove that Trans(X, s, r) = Trans(X′, s, r) for every X′ ∈ PropInv(X).

To further simplify the exposition we introduce another notation. We denote by p^X_v the number of times that vector v occurs in X. Thus, X′ ∈ PropInv(X) if and only if there exists a value γ ∈ Q+ such that p^{X′}_v = γ p^X_v for every v ∈ vals(X).

We now have all the necessary ingredients to proceed with the proof of Proposition 3.1, which we prove by establishing the property above. Let X = (x_1, ..., x_n) be an arbitrary sequence of vectors, and let X′ = (x′_1, ..., x′_m) ∈ PropInv(X). Moreover, let Z = (z_1, ..., z_n) = Enc(X; θ) and Z′ = (z′_1, ..., z′_m) = Enc(X′; θ). We first prove the following property: for every pair of indices (i, j) ∈ {1, ..., n} × {1, ..., m}, if x_i = x′_j then z_i = z′_j.

Let (i, j) be a pair of indices such that x_i = x′_j. From the encoder equations we have that DISPLAYFORM4. By those equations and the restriction on the form of normalization functions we have that DISPLAYFORM5, where α = Σ_{ℓ=1}^{n} f_ρ(score(Q(x_i), K(x_ℓ))). The above equation can be rewritten as DISPLAYFORM6. By a similar reasoning we can write DISPLAYFORM7. Now, since X′ ∈ PropInv(X), we know that vals(X′) = vals(X) and there exists a γ ∈ Q+ such that p^{X′}_v = γ p^X_v for every v ∈ vals(X). Finally, from this last property, plus the fact that x_i = x′_j, we have DISPLAYFORM8, which completes the proof of the property above.
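The crux of the argument is that when all key scores tie, attention collapses to the average of the value vectors and therefore only sees symbol proportions. A tiny numerical illustration (the embeddings here are chosen by us for the example):

```python
import numpy as np

f = {'a': np.array([0.0, 1.0]), 'b': np.array([0.0, -1.0])}

def tied_attention(word):
    """Att(q, K, V) when every score is identical: the mean of the values."""
    V = np.stack([f[c] for c in word])
    return V.mean(axis=0)

# aabb and aaabbb are proportion-invariant versions of each other,
# so the attention output (and hence the whole network) cannot differ.
print(tied_attention('aabb'), tied_attention('aaabbb'))   # both [0. 0.]
```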
Consider now the complete encoder TEnc. Let (K, V) = TEnc(X) and (K′, V′) = TEnc(X′), and let q be an arbitrary vector. We now prove that Att(q, K, V) = Att(q, K′, V′). By a reasoning similar to the proof of the property above (plus induction on the layers of TEnc) we obtain that if x_i = x′_j then k_i = k′_j and v_i = v′_j, for every i ∈ {1, ..., n} and j ∈ {1, ..., m}. Thus, there exists a mapping DISPLAYFORM9 for every i ∈ {1, ..., n} and j ∈ {1, ..., m}. Let us focus now on Att(q, K, V). We have DISPLAYFORM10. Similarly as before, we can rewrite this as DISPLAYFORM11, and finally, using that X′ ∈ PropInv(X), we obtain DISPLAYFORM12, which is what we wanted.

To complete the rest of the proof, consider Trans(X, y_0, r), which is defined by the recursion y_{k+1} = TDec(TEnc(X), (y_0, y_1, ..., y_k)). To prove that Trans(X, y_0, r) = Trans(X′, y_0, r) we use an inductive argument. We know that y_1 = TDec(TEnc(X), (y_0)) = TDec((K, V), (y_0)). Now, TDec only accesses (K, V) via attentions of the form Att(q, K, V), and for the case of y_1 the vector q can only depend on y_0; thus, from Att(q, K, V) = Att(q, K′, V′) we have that DISPLAYFORM13. The rest of the steps follow by a simple induction on k.

To obtain a contradiction, assume that there is a language recognizer A that uses a Transformer network and such that L = L(A). Now consider the strings w_1 = aabb and w_2 = aaabbb. Since w_1 ∈ PropInv(w_2), by Proposition 3.1 we have that w_1 ∈ L(A) if and only if w_2 ∈ L(A), which is a contradiction since w_1 ∈ L but w_2 ∉ L. This completes the proof of the corollary.

We construct a language recognizer A = (Σ, f, Trans, s, F) with Trans a very simple Transformer network of dimension d = 2, using just one layer of encoder and one layer of decoder, such that L(A) = {w ∈ {a, b}* | w has strictly more symbols a than symbols b}. Our embeddings are chosen so that every a contributes +1 and every b contributes −1, with f(b) = [0, −1].

Assume that the output of the encoder part of the Transformer is X = (x_1, ..., x_n). We first use an encoder layer that implements the identity function. This can be trivially done by using null functions for the self-attention, so that through the residual connections this encoder layer preserves the original x_i values. For the final V(·) and K(·) functions of the Transformer encoder we make suitable linear choices. For the decoder we use a similar approach: we consider the identity in the self-attention plus the residual (which can be done by just using null functions for the self-attention). For the external attention, that is, the attention over (K_e, V_e), we let score and ρ be arbitrary scoring and normalization functions, and finally we make a suitable choice for the function O(·).

In order to complete the proof we introduce some notation. Let #_a(w) denote the number of a's in w, and similarly #_b(w) the number of b's in w. Let c_w be the value (#_a(w) − #_b(w))/n. We now prove that, for any string w ∈ {a, b}*, if we consider f(w) = X = (x_1, ..., x_n) as the input sequence for Trans and use the initial value s for the decoder, the complete network computes a sequence y_1, y_2, ..., y_r with y_i = [c_w, 0] for every i ≥ 1.

We proceed by induction. The base case trivially holds for y_0 = s. Assume now that we are at step r and the input for the decoder is (y_0, y_1, ..., y_r). We will show that y_{r+1} = [c_w, 0]. Since we consider the identity in the self-attention, we have that p_i = y_i for every i ∈ {0, ..., r}. Now consider the external attention, that is, the attention over (K_e, V_e). Since all key vectors in K_e are equal, the external attention produces the same score value for all positions; that is, score(p_i, k_{j_1}) = score(p_i, k_{j_2}) for all j_1, j_2. Thus we have that DISPLAYFORM1. Then, since V_e = X, we have that DISPLAYFORM2 for every i ∈ {0, ..., r}. The last equality holds because of our choice of embeddings: every a in w sums one and every b subtracts one. Thus, we have that DISPLAYFORM3 for every i ∈ {0, ..., r}.
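The quantity the decoder converges to is easy to compute directly; here is a two-line sketch of c_w and the induced acceptance rule (names ours):

```python
def c_w(word):
    """(#a(w) - #b(w)) / n, the value carried in the decoder outputs."""
    return (word.count('a') - word.count('b')) / len(word)

accepts = lambda w: c_w(w) > 0     # strictly more a's than b's
print(accepts('aaba'), accepts('abab'), accepts('bba'))   # True False False
```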
In the next step, after the external attention plus the residual connection, we have DISPLAYFORM4. Finally, y_{r+1} = F(z_r) = z_r = [c_w, 0], which is exactly what we wanted to prove.

To complete the proof, notice that #_a(w) > #_b(w) if and only if c_w > 0. If we define F as Q+ × Q, the recognizer A = (Σ, f, Trans, s, F) accepts the string w exactly when c_w > 0; that is, w ∈ L(A) if and only if #_a(w) > #_b(w). That is exactly the language S, and so the proof is complete.

Let M = (Q, Σ, δ, q_init, F) be a Turing machine with an infinite tape, and assume that the special symbol # ∈ Σ is used to mark blank positions on the tape. We make the following assumptions about how M works when processing an input string:

• M always moves its head either to the left or to the right (it never stays at the same cell).
• M begins at state q_init pointing to the cell immediately to the left of the input string.
• M never makes a transition to the left of the initial position.
• Q has a special state q_read used to read the complete input.
• Initially (time 0), M makes a transition to state q_read and moves its head to the right.
• While in state q_read it moves to the right until the symbol # is read.
• There are no transitions going out from accepting states (states in F).

It is easy to prove that every general Turing machine is equivalent to one satisfying the above assumptions. We prove that one can construct a Transformer network Trans_M that is able to simulate M on every possible input string. The construction is somewhat involved and uses several helping values, sequences, and intermediate results. To make the reading easier we divide the construction and proof into three parts: we first give a high-level view of the strategy we use, then we give some details on the architecture of the encoder and decoder needed to implement our strategy, and finally we formally prove that every part of our architecture can actually be implemented.

In the encoder part of Trans_M we receive as input the string w = s_1 s_2 ... s_n. We first use an embedding function to represent every s_i as a one-hot vector and add a positional encoding for every index. The encoder produces the output (K_e, V_e), where K_e = (k_1, ..., k_n) and V_e = (v_1, ..., v_n).

In the decoder part of Trans_M we simulate a complete execution of M over w = s_1 s_2 ⋯ s_n. For this we define the following sequences (for i ≥ 0):

q^(i): the state of M at time i;
s^(i): the symbol under the head of M at time i;
v^(i): the symbol written by M at time i;
m^(i): the head direction in the transition of M at time i.

For m^(i) we assume that −1 represents a movement to the left and 1 represents a movement to the right. In our construction we show how to build a decoder that computes all the above values for every time step i using self-attention plus attention over the encoder part. Since the above values contain all the information needed to reconstruct the complete history of the computation, we can effectively simulate M.

In particular, our construction produces a sequence of output vectors y_1, y_2, ... such that, for every i, the vector y_i contains information about q^(i) and s^(i) encoded as one-hot vectors. The construction and proof proceed by induction. We begin with an initial vector y_0 that represents the state of the computation before it has started, that is, q^(0) = q_init and s^(0) = #. For the induction step we assume that we have already computed y_1, ..., y_r such that y_i contains information about q^(i) and s^(i), and we show how, with input (y_0, y_1, ..., y_r),
the decoder produces the next vector y_{r+1}, containing q^(r+1) and s^(r+1).

The overview of the construction is as follows. First notice that the transition function δ relates the above values through the equation

δ(q^(i), s^(i)) = (q^(i+1), v^(i), m^(i)).

We prove that we can use a two-layer feed-forward network to mimic the transition function δ (Lemma B.2). Thus, given that the input vector y_i contains q^(i) and s^(i), we can produce the values q^(i+1), v^(i), and m^(i) (and store them as values in the decoder). In particular, since y_r is in the input, we can produce q^(r+1), which is part of what we need for y_{r+1}. In order to complete the construction we also need to compute the value s^(r+1), that is, the symbol under the head of machine M at the next time step (time r + 1). We next describe, at a high level, how this symbol can be computed with two additional decoder layers.

We first make some observations about s^(i) that are fundamental for our computation. Assume that at time i the head of M is pointing to the cell at index k. Then we have three possibilities:

1. If i ≤ n, then s^(i) = s_i, since M is still reading its input string.
2. If i > n and M has never written at index k, then s^(i) = #, the blank symbol.
3. In any other case, that is, if i > n and time i is not the first time that M points to index k, then s^(i) is the last symbol written by M at index k.

For case (1) we can produce s^(i) by simply attending to position i in the encoder part. Thus, if r + 1 ≤ n, to produce s^(r+1) we can just attend to index r + 1 in the encoder and copy this value into y_{r+1}. For cases (2) and (3) the solution is a bit more involved, but almost all the important work is to compute the index to which M will be pointing at time r + 1.

To formalize this computation, let c^(i) ∈ Z denote the index of the cell to which the head of M is pointing at time i. It satisfies c^(i+1) = c^(i) + m^(i). If we unroll this equation, assuming that c^(0) = 0, we obtain

c^(i) = m^(0) + m^(1) + ⋯ + m^(i−1).

Then, at step i in the decoder we have everything necessary to compute the value c^(i), and also everything necessary to compute c^(i+1). We actually show that the computation (of a representation) of c^(i) and c^(i+1) can be done by using one layer of self-attention (Lemma B.3).

We still need to define a final notion. With c^(i) one can define the helping value ℓ(i) as follows:

ℓ(i) = max { j | j < i and c^(j) = c^(i) },   and ℓ(i) = i − 1 if no such j exists.

Thus, ℓ(i) is a value such that c^(ℓ(i)) = c^(i), which means that at time i and at time ℓ(i) the head of M was pointing to the same cell; moreover, ℓ(i) is the maximum value less than i satisfying this condition, i.e., ℓ(i) is the last time (previous to i) at which M was pointing to position c^(i). Notice that in every step M moves its head either to the right or to the left (it never stays at the same cell). This implies that c^(i) ≠ c^(i−1) for every i, from which we obtain that ℓ(i) < i − 1 whenever the maximum above exists. In the case that c^(i) is visited for the first time at time step i, the value ℓ(i) would otherwise be ill-defined; in such a case we let ℓ(i) = i − 1. This makes ℓ(i) ≤ i − 1 for all i, and allows us to check that c^(i) is visited for the first time at time step i by just checking whether ℓ(i) = i − 1.
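The bookkeeping values c(i) and ℓ(i) are simple functions of the movement sequence. A direct sketch (the movement sequence and all names are hypothetical, for illustration):

```python
def head_positions(moves):
    """c(i) = m(0) + ... + m(i-1), with c(0) = 0; each move is -1 or +1."""
    c = [0]
    for m in moves:
        c.append(c[-1] + m)
    return c

def ell(c, i):
    """l(i): the last j < i with c(j) = c(i), or i - 1 on a first visit."""
    for j in range(i - 1, -1, -1):
        if c[j] == c[i]:
            return j
    return i - 1

c = head_positions([1, 1, -1, 1, 1])   # [0, 1, 2, 1, 2, 3]
print(ell(c, 3))   # 1: the head was last on cell c(3) = 1 at time 1
print(ell(c, 5))   # 4 = i - 1: cell c(5) = 3 is visited for the first time
```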
We now have all the necessary ingredients to explain how we compute the desired value s^(r+1). Assume that r + 1 > n (the case r + 1 ≤ n was already covered above). We first note that if ℓ(r + 1) = r, then s^(r+1) = #, since this is the first time that cell c^(r+1) is visited. On the other hand, if ℓ(r + 1) < r, then s^(r+1) is the value written by M at time ℓ(r + 1), which is exactly v^(ℓ(r+1)). Thus, in this case we only need to attend to position ℓ(r + 1) and copy the value v^(ℓ(r+1)) to produce s^(r+1). We show that all this can be done with an additional self-attention decoder layer (Lemma B.4).

We have described, at a high level, a decoder that, with input (y_0, y_1, ..., y_r), computes the values q^(r+1) and s^(r+1), which is what we need to produce y_{r+1}. We next show all the details of this construction.

In this section we give more details on the architecture of the encoder and decoder needed to implement our strategy. We state several intermediate claims as lemmas, which we formally prove in Section B.4.3. For our attention mechanism we use the following non-linear function:

φ(x) = −|x|.

Note that φ can be implemented as φ(x) = −relu(x) − relu(−x). We use φ(·) to define the scoring function score_φ(u, v) = φ(⟨u, v⟩).

Now, let q ∈ Q^d, and let K = (k_1, ..., k_n) and V = (v_1, ..., v_n) be tuples of elements in Q^d. We describe how Att(q, K, V) is generally computed when hard attention is considered. Assume first that there exists a single j* ∈ {1, ..., n} that maximizes score_φ(q, k_{j*}). In that case we have Att(q, K, V) = v_{j*} with

j* = arg max_{1 ≤ j ≤ n} score_φ(q, k_j) = arg min_{1 ≤ j ≤ n} |⟨q, k_j⟩|.

Thus, when computing hard attention with the function score_φ(·) we essentially select the vector v_j such that the dot product ⟨q, k_j⟩ is as close to 0 as possible. If there is more than one index, say j_1, j_2, ..., j_r, minimizing the dot product ⟨q, k_j⟩, then

Att(q, K, V) = (1/r)(v_{j_1} + v_{j_2} + ⋯ + v_{j_r}).

Thus, in the extreme case in which all dot products ⟨q, k_j⟩ are equal for every index j, attention behaves just as the average of all value vectors, that is, Att(q, K, V) = (1/n) Σ_{j=1}^{n} v_j. We use all these properties of hard attention in our proof.

We now describe the vectors that we use in the encoder and decoder parts of Trans_M. The vectors used in the Trans_M layers are of dimension d = 2|Q| + 4|Σ| + 11. To simplify the exposition, whenever we use a vector v ∈ Q^d, we write it arranged in four groups of values as DISPLAYFORM0, where q_i ∈ Q^{|Q|}, s_i ∈ Q^{|Σ|}, and x_i ∈ Q. Whenever any of the four groups of values in a vector of the above form is composed only of 0's, we just write '0, ..., 0', where the length of this sequence is implicit in the length of the corresponding group. Finally, we denote by 0_q the all-zeros vector in Q^{|Q|}, and similarly by 0_s the all-zeros vector in Q^{|Σ|}.

For a symbol s ∈ Σ, we use ⟦s⟧ to denote the one-hot vector in Q^{|Σ|} that represents s. That is, given an enumeration π: Σ → {1, ..., |Σ|}, the vector ⟦s⟧ has a 1 at position π(s) and a 0 in all other positions. Similarly, for q ∈ Q, we use ⟦q⟧ to denote the one-hot vector in Q^{|Q|} that represents q.

We can now introduce the embedding and positional encoding used in our construction. We use an embedding function f: Σ → Q^d defined as DISPLAYFORM0. Our construction uses a positional encoding pos: N → Q^d such that DISPLAYFORM1. Thus, given an input sequence s_1 s_2 ⋯ s_n ∈ Σ*, we have that DISPLAYFORM2; we denote this last vector by x̄_i. That is, if M receives the input string w = s_1 s_2 ⋯ s_n, then the input for Trans_M is the sequence (x̄_1, x̄_2, ..., x̄_n). The need for a positional encoding having the values 1/i and 1/i² will become clear when we formally prove the correctness of our construction.
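For reference, the hard-attention rule induced by score_φ — select the value vectors whose keys make the dot product closest to zero, averaging ties — can be sketched as follows (a toy example of ours, not the vectors of the construction):

```python
import numpy as np

def hard_attention(q, K, V, tol=1e-9):
    """Att(q, K, V) with score(q, k) = -|<q, k>|."""
    scores = -np.abs(K @ q)
    mask = scores >= scores.max() - tol   # all maximizing positions
    return V[mask].mean(axis=0)           # average in case of ties

q = np.array([1.0, -1.0])
K = np.array([[1.0, 1.0], [2.0, 1.0], [0.0, 3.0]])   # <q, k> = 0, 1, -3
V = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(hard_attention(q, K, V))   # [1. 0.]: position 0 uniquely maximizes
```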
We need one final preliminary notion. In the formal construction of Trans_M we also use the following helping sequences: DISPLAYFORM3. These are used to identify when M is still reading the input string.

The encoder part of Trans_M is very simple. For TEnc_M we use a single-layer encoder such that DISPLAYFORM0. It is straightforward to see that these vectors can be produced with a single encoder layer by using a trivial self-attention, taking advantage of the residual connections, and then using linear transformations for V(·) and K(·).

When constructing the decoder we use the following property.

Lemma B.1. Let q ∈ Q^d be a vector of the form q = [⋆, ..., ⋆, 1, j, ⋆, ⋆], where j ∈ N and ⋆ denotes an arbitrary value. Then we have that DISPLAYFORM1.

We next show how to construct the decoder part of Trans_M to produce the sequence of outputs y_1, y_2, ..., where y_i is given by DISPLAYFORM2. That is, y_i contains information about the state of M at time i, the symbol under the head of M at time i, and the last direction followed by M (the direction of the head movement at time i − 1). The need to include m^(i−1) will become clear in the construction. We consider as the starting vector for the decoder the vector DISPLAYFORM3, assuming that m^(−1) = 0 to represent that previous to time 0 there was no head movement.

Our construction resembles a proof by induction: we describe the architecture piece by piece, and at the same time we show, for every r ≥ 0, how our architecture constructs y_{r+1} from the previous vectors (y_0, ..., y_r).

Thus, assume that y_0, ..., y_r satisfy the properties stated above. Since we are using positional encodings, the actual input for the first layer of the decoder is the sequence ȳ_0, ȳ_1, ..., ȳ_r, where ȳ_i denotes the vector y_i plus its positional encoding pos(i + 1); thus DISPLAYFORM4.

For the first self-attention we just produce the identity, which can be easily implemented with a trivial attention plus the residual connection. Thus, we produce the sequence of vectors (p^1_0, ..., p^1_r); by Lemma B.1 we know that if we use p^1_i to attend over the encoder we obtain Att(p^1_i, K_e, V_e) = DISPLAYFORM6. Thus we finally produce the vector a^1_i = DISPLAYFORM7.

As the final piece of the first decoder layer we use a function O_1(·) that satisfies the following lemma.

Lemma B.2. There exists a two-layer feed-forward network O_1: Q^d → Q^d such that DISPLAYFORM8, besides some other linear transformations.

We finally produce as the output of the first decoder layer the sequence (z^1_0, ..., z^1_r). Notice that z^1_r already holds information about q^(r+1) and m^(r), which we need for constructing the vector y_{r+1}. The single piece of information that we still need to construct is s^(r+1), that is, the symbol under the head of machine M at the next time step (time r + 1). We next describe how this symbol can be computed with two additional decoder layers. Recall that c^(i) is the cell to which M points at time i, and that it satisfies c^(i+1) = c^(i) + m^(i). We can take advantage of this property to prove the following lemma.

Lemma B.3. There exist functions Q_2(·), K_2(·), and V_2(·), defined by feed-forward networks, such that DISPLAYFORM12.

Lemma B.3 essentially shows that one can construct a representation of the values c^(i) and c^(i+1) for every possible index i. In particular, we will know the value c^(r+1), which represents the cell to which the machine points at the next time step.
Continuing with the second decoder layer, when using the self-attention above and adding the residual connection we obtain the sequence of vectors (p^2_0, ..., p^2_r) with DISPLAYFORM13 DISPLAYFORM14.

We now describe how to use a third and final decoder layer to produce our desired value s^(r+1) (the symbol under the head of M at the next time step). Recall that ℓ(i) is the last time (previous to i) at which M was pointing to position c^(i), or i − 1 if this is the first time that M points to c^(i). We can prove the following lemma.

Lemma B.4. There exist functions Q_3(·), K_3(·), and V_3(·), defined by feed-forward networks, such that DISPLAYFORM15.

We prove Lemma B.4 by showing that, for every i, one can attend exactly to position ℓ(i + 1) and then just copy both values. We do this by taking advantage of the values c^(i) and c^(i+1) previously computed for every index i. From the attended values, the third layer then produces the desired symbol.

Proof of Lemma B.1. Let q ∈ Q^d be a vector of the form q = [⋆, ..., ⋆, 1, j, ⋆, ⋆], where j ∈ N and ⋆ is an arbitrary value. We next prove that Att(q, K_e, V_e) = DISPLAYFORM0, where α(j) and β(j) are defined as DISPLAYFORM1. Then we have that DISPLAYFORM2. Notice that if j ≤ n, then the above expression is maximized when i = j; otherwise, if j > n, the expression is maximized when i = n. Thus Att(q, K_e, V_e) = v_{i*}, where i* = j if j ≤ n and i* = n if j > n. We note that i* as just defined is exactly β(j) = min(j, n). Thus, given that v_i is defined as DISPLAYFORM3, we obtain DISPLAYFORM4, which is what we wanted to prove.

Proof of Lemma B.2. In order to prove the lemma we need some intermediate notions and properties. Assume that the enumeration π_1: Σ → {1, ..., |Σ|} is the one used to construct the one-hot vectors ⟦s⟧ for s ∈ Σ, and that π_2: Q → {1, ..., |Q|} is the one used to construct ⟦q⟧ for q ∈ Q. Using π_1 and π_2 one can construct an enumeration for the pairs in Q × Σ and then construct one-hot vectors for the pairs in this set. Formally, given (q, s) ∈ Q × Σ, we denote by ⟦(q, s)⟧ the one-hot vector with a 1 at position (π_1(s) − 1)|Q| + π_2(q) and a 0 in every other position. To simplify the notation we use π(q, s) to denote (π_1(s) − 1)|Q| + π_2(q). One can similarly construct an enumeration π′ for Q × Σ × {−1, 1} such that π′(q, s, m) = π(q, s) if m = −1 and π′(q, s, m) = |Q||Σ| + π(q, s) if m = 1. We denote by ⟦(q, s, m)⟧ the corresponding one-hot vector for every (q, s, m) ∈ Q × Σ × {−1, 1}.

We next prove three helping properties; a small sketch of the encodings they manipulate follows the list. In every case q ∈ Q, s ∈ Σ, m ∈ {−1, 1}, and δ(·, ·) is the transition function of machine M.

1. There exists f_1: Q^{|Q|+|Σ|} → Q^{|Q||Σ|} such that f_1([⟦q⟧, ⟦s⟧]) = ⟦(q, s)⟧.
2. There exists f_2: Q^{|Q||Σ|} → Q^{2|Q||Σ|} such that f_2(⟦(q, s)⟧) = ⟦δ(q, s)⟧.
3. There exists f_3: Q^{2|Q||Σ|} → Q^{|Q|+|Σ|+1} such that f_3(⟦(q, s, m)⟧) = [⟦q⟧, ⟦s⟧, m].
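The encodings above are plain one-hot bookkeeping. Here is a sketch with a hypothetical two-state, three-symbol machine (all names and the sample rule are ours; indices are 0-based, unlike the 1-based enumerations in the text):

```python
import numpy as np

Q = ['q_init', 'q_read']                 # enumeration pi_2
Sigma = ['0', '1', '#']                  # enumeration pi_1
n = len(Q) * len(Sigma)

def pair_index(q, s):
    """pi(q, s) = (pi_1(s) - 1)|Q| + pi_2(q), shifted to 0-based indexing."""
    return Sigma.index(s) * len(Q) + Q.index(q)

def one_hot(i, size):
    v = np.zeros(size); v[i] = 1.0
    return v

# M_delta has a 1 at (pi(q, s), pi'(p, r, m)); the second half of the
# columns corresponds to moves m = +1, the first half to m = -1.
M_delta = np.zeros((n, 2 * n))
def add_rule(q, s, p, r, m):             # delta(q, s) = (p, r, m)
    M_delta[pair_index(q, s), pair_index(p, r) + (n if m == 1 else 0)] = 1.0

add_rule('q_read', '1', 'q_read', '1', +1)           # a sample rule
out = one_hot(pair_index('q_read', '1'), n) @ M_delta
print(np.flatnonzero(out))               # the one-hot code of delta(q_read, '1')
```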
To show (1), let us denote by S_i, for i ∈ {1, ..., |Σ|}, the matrix of dimensions |Σ| × |Q| whose i-th row is all 1's and which is 0 everywhere else. Note that for every s ∈ Σ, the product ⟦s⟧S_i is the all-ones row vector 1 if i = π_1(s), and the zero vector otherwise. Now, consider the vector v_{(q,s)} obtained by concatenating the blocks ⟦q⟧ + ⟦s⟧S_i for i = 1, ..., |Σ|:

v_{(q,s)} = [⟦q⟧ + ⟦s⟧S_1, ⟦q⟧ + ⟦s⟧S_2, ..., ⟦q⟧ + ⟦s⟧S_{|Σ|}].

We first note that for every i ∈ {1, ..., |Σ|}, if i ≠ π_1(s) then ⟦q⟧ + ⟦s⟧S_i = ⟦q⟧ + 0 = ⟦q⟧. Moreover, ⟦q⟧ + ⟦s⟧S_{π_1(s)} = ⟦q⟧ + 1 is a vector that has a 2 exactly at index π_2(q) and a 1 in all other positions. Thus, the vector v_{(q,s)} has a 2 exactly at position (π_1(s) − 1)|Q| + π_2(q) and is either 0 or 1 in every other position.

Now, let o denote the vector in Q^{|Q||Σ|} that has a 1 in every position, and consider the affine transformation

g_1([⟦q⟧, ⟦s⟧]) = v_{(q,s)} − o.

The vector g_1([⟦q⟧, ⟦s⟧]) has a 1 only at position (π_1(s) − 1)|Q| + π_2(q) = π(q, s) and is less than or equal to 0 in every other position. Thus, to construct f_1(·) we apply the piecewise-linear sigmoidal activation σ(·) to obtain

f_1([⟦q⟧, ⟦s⟧]) = σ(g_1([⟦q⟧, ⟦s⟧])) = ⟦(q, s)⟧,

which is what we wanted.

Now, to show (2), let M_δ denote the matrix of dimensions (|Q||Σ|) × (2|Q||Σ|) constructed as follows: for (q, s) ∈ Q × Σ, if δ(q, s) = (p, r, m), then M_δ has a 1 at position (π(q, s), π′(p, r, m)) and a 0 in every other position, that is, DISPLAYFORM11. It is straightforward to see that ⟦(q, s)⟧M_δ = ⟦δ(q, s)⟧, and thus we can define f_2(·) as f_2(x) = xM_δ.

To show (3), consider the matrix A of dimensions (2|Q||Σ|) × (|Q| + |Σ| + 1) such that DISPLAYFORM13. Then we define f_3(·) as f_3(x) = xA.

We are now ready to begin the proof of the lemma. Recall that a^1_i is given by DISPLAYFORM15. We need to construct a function O_1: Q^d → Q^d such that DISPLAYFORM16. We first use a function h_1(·) that works as follows. Let m̂^(i−1) denote the value DISPLAYFORM17, where g_1(·) is the function defined above. It is clear that h_1(·) is an affine transformation. Moreover, we note that except for g_1([·, ·]), all other values in h_1(a^1_i) are between 0 and 1. Thus, if we apply the function σ(·) to h_1(a^1_i), DISPLAYFORM19.

Now, if the compared key component is the all-zeros vector, then score_φ(Q_2(z^1_i), K_2(z^1_j)) = 0 for every j ∈ {0, ..., i}. Thus, the attention Att(Q_2(z^1_i), K_2(Z^1), V_2(Z^1)) that we need to compute is just the average of all the vectors in V_2(Z^1), DISPLAYFORM21, which is exactly what we wanted to show.

Proof of Lemma B.4. Recall that z^2_i is the vector z DISPLAYFORM22. We need to construct functions Q_3(·), K_3(·), and V_3(·) such that DISPLAYFORM23. We first define the query function Q_3: DISPLAYFORM24. Now, for every j ∈ {0, 1, ..., i} we define DISPLAYFORM25. It is clear that all three functions are linear transformations and thus can be defined by feed-forward networks.

Consider now the attention Att(Q_3(z^2_i), K_3(Z^2), V_3(Z^2)). In order to compute this value, and since we are considering hard attention, we need to find the value j ∈ {0, 1, ..., i} that maximizes score_φ(Q_3(z^2_i), K_3(z^2_j)). Assuming that this value is unique, say j*, we have that Att(Q_3(z^2_i), K_3(Z^2), V_3(Z^2)) = V_3(z^2_{j*}). We next show that, given our definitions above, it always holds that j* = ℓ(i + 1); then V_3(z^2_{j*}) is exactly the vector that we wanted to obtain. To simplify the notation, we denote by χ^i_j the dot product ⟨Q_3(z^2_i), K_3(z^2_j)⟩. By our definitions of Q_3(·) and K_3(·) we have that DISPLAYFORM29, where ε_k = 1/(k+1). We next prove the following auxiliary property:

if j_1 is such that c^(j_1) = c^(i+1) and j_2 is such that c^(j_2) ≠ c^(i+1), then |χ^i_{j_1}| < |χ^i_{j_2}|.   (†)

In order to prove (†), assume first that j ∈ {0, ..., i} is such that c^(j) ≠ c^(i+1). Then we have that |c^(i+1) − c^(j)| ≥ 1, since c^(i+1) and c^(j) are integer values. From this we have two possibilities for χ^i_j: DISPLAYFORM31. Notice that 1 ≥ ε_j ≥ ε_i > 0. Then we have that ε_i ε_j ≥ (ε_i ε_j)² > (1/3)(ε_i ε_j)², and thus DISPLAYFORM32. Finally, using again that 1 ≥ ε_j ≥ ε_i > 0, from the above equation we obtain that DISPLAYFORM33. Thus, we have shown that if c^(j) ≠ c^(i+1) then DISPLAYFORM34.

Now assume that j ∈ {0, ..., i} is such that c^(j) = c^(i+1). In this case we have that DISPLAYFORM35. We have thus shown that |χ^i_j| is strictly smaller in the case c^(j) = c^(i+1) than in the case c^(j) ≠ c^(i+1), DISPLAYFORM36. This completes the proof of the property (†).
We now have all the necessary ingredients to prove that arg min_j |χ^i_j| = ℓ(i + 1). Recall first that ℓ(i + 1) is defined as

ℓ(i + 1) = max { j | j ≤ i and c^(j) = c^(i+1) },   and ℓ(i + 1) = i in the other case.

Assume first that there exists j ≤ i such that c^(j) = c^(i+1). By (†) we know that arg min_{j ∈ {0,...,i}} |χ^i_j| DISPLAYFORM38. On the contrary, assume that c^(j) ≠ c^(i+1) for every j ≤ i; then c^(i+1) is a cell that has never been visited before by M. Given that M never makes a transition to the left of its initial cell, the cell c^(i+1) lies to the right of every other previously visited cell. This implies that c^(i+1) > c^(j) for every j ≤ i. Thus, for every j ≤ i we have DISPLAYFORM39. Moreover, notice that if j < i then ε_j > ε_i, and thus, if j < i we have that DISPLAYFORM40.

The formulas of the Neural GPU in detail are as follows (with S^0 the initial input tensor):

U^t = U(S^{t−1}),    R^t = R(S^{t−1}),    S^t = U^t ⊙ S^{t−1} + (1 − U^t) ⊙ F(R^t ⊙ S^{t−1}),

with U(·), R(·), and F(·) defined as

U(S) = σ(K_U * S + B_U),    R(S) = σ(K_R * S + B_R),    F(S) = f(K_F * S + B_F).

Consider now an RNN encoder-decoder N of dimension d composed of the equations h_i = σ(x_i W + h_{i−1} V) and g_t = σ(g_{t−1} U), with h_0 = 0 and g_0 = h_n, where n is the length of the input. We construct a Neural GPU network NGPU that simulates N as follows. Assume that the input of N is X = (x_1, ..., x_n). We first construct the sequence X′ = (x′_1, ..., x′_n) with x′_i = [x_i, 0, 0, 1, 1, 0], where 0 ∈ Q^d is the vector with all values 0. Notice that x′_i ∈ Q^{3d+3}; moreover, it is straightforward to see that if x_i was constructed from an embedding function f: Σ → Q^d applied to a symbol a ∈ Σ, then x′_i can also be constructed with an embedding function f′: Σ → Q^{3d+3}.

We consider an input tensor S ∈ Q^{n×1×(3d+3)} such that S_{i,1,:} = x′_i for every i ∈ {1, ..., n}. Notice that since we picked w = 1, our tensor S is actually a 2D grid; our proof shows that a bidimensional tensor is enough for simulating an RNN.

We now describe how to construct the kernel banks K_U, K_R, and K_F of shape (2, 1, 3d+3, 3d+3). For each kernel K_X we essentially have to define two matrices, K_X[1,1,:,:] and K_X[2,1,:,:], each of dimension (3d+3) × (3d+3). We begin by defining every matrix in K_F as a block matrix; when defining the matrices, all blank blocks are considered to be 0: DISPLAYFORM1, where F_1 and F_2 are suitable 3 × 3 matrices. Before continuing with the proof, we also record how K * S decomposes for every kernel K_X and tensor S.

We now prove that the following properties hold for every t ≥ 0: DISPLAYFORM0, where α^k_j is given by the recurrence DISPLAYFORM1. That is, we are going to prove that our construction actually simulates N. From these properties one can see that the intuition of our construction is to use the first d components to simulate the encoder part, the next d components to communicate data between the encoder and decoder simulations, and the next d components to simulate the decoder part. The last three components are needed as gadgets for the gates, to simulate a sequential read of the input and to ensure that the hidden states of the encoder and decoder are updated properly. We prove the above statement by induction on t. First notice that the property trivially holds for S^0.
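For concreteness, the input lifting used in this construction is straightforward to write down; a sketch with random input (all names are ours):

```python
import numpy as np

d, n = 4, 5
X = np.random.default_rng(0).normal(size=(n, d))      # embedded input of N

def lift(x):
    """x'_i = [x_i, 0_d, 0_d, 1, 1, 0] in Q^{3d+3}."""
    return np.concatenate([x, np.zeros(2 * len(x)), [1.0, 1.0, 0.0]])

S = np.stack([lift(x) for x in X])[:, None, :]        # tensor of shape (n, 1, 3d+3)
E = S[:, 0, :d]          # encoder block, plays the role of h_i
D = S[:, 0, d:2 * d]     # decoder block, plays the role of g_t
print(S.shape)           # (5, 1, 15)
```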
Now assume that the property holds for t − 1; we prove it for t. We know that U^t is computed as DISPLAYFORM2. Thus we have that DISPLAYFORM3. By the induction hypothesis we have DISPLAYFORM4. Now, a case analysis over the positions i = t − 1, i = t, and i > t (using the structure of the kernels and the saturation of σ) gives the value of the gates at every cell. We are almost done with the inductive step: we only need to compute σ((K_F * (R^t ⊙ S^{t−1}))_{i,1,:}). Given what we have for R^t and S^{t−1}, this computation follows along the same lines, which completes the induction.

Let S ∈ Q^{kp×w×d} be a tensor that is periodic in its first coordinate, that is, S_{i,:,:} = S_{i+p,:,:} for every i. Consider now an arbitrary uniform Neural GPU that processes the tensor S, and assume that S^1, S^2, ..., S^r is the sequence produced by it. We next prove that for every t and for every i it holds that S^t_{i,:,:} = S^t_{i+p,:,:}. We prove it by induction on t. The case of S^0 holds by definition. Thus, assume that S^{t−1} satisfies the property, and let DISPLAYFORM5. Since we are considering uniform Neural GPUs, we know that there exist three matrices B′_U, B′_R, and B′_F defining the row-uniform biases, and a direct computation then shows that each row of S^t depends only on the corresponding rows of S^{t−1}, so the periodicity is preserved. This completes the first part of the proof: we have shown that if the input of a uniform Neural GPU is periodic, then the output is also periodic.

We make a final observation. Let N be a uniform Neural GPU, and let S ∈ Q^{kp×w×d} be a tensor such that S_{i,:,:} = S_{i+p,:,:} for every i. Moreover, let T ∈ Q^{k′p×w×d} be a tensor such that T_{i,:,:} = T_{i+p,:,:} for every i, and assume that S_{1:p,:,:} = T_{1:p,:,:}. Let S^1, S^2, ... and T^1, T^2, ... be the sequences produced by N. Then, with a similar argument as above, it is easy to prove that for every t it holds that S^t_{1:p,:,:} = T^t_{1:p,:,:}. From this it follows that uniform Neural GPUs are not able to recognize the length of periodic inputs.

Thus, assume that there is a language recognizer A defined by a uniform Neural GPU N such that L(A) is the set of all strings of even length. Let u be an arbitrary string in Σ* such that |u| = p with p an odd number, and let w = uu and w′ = uuu. Notice that |w| = 2p and thus w ∈ L(A), but |w′| = 3p, which is odd, and thus w′ ∉ L(A).

Let f: Σ → Q^d, and let X = f(w) = (x_1, x_2, ..., x_{2p}) and X′ = f(w′) = (x′_1, x′_2, ..., x′_{3p}). Consider now the tensor S ∈ Q^{2p×w×d} such that S_{i,1,:} = x_i for i ∈ {1, ..., 2p}; thus S_{i,:,:} = S_{i+p,:,:}. Similarly, consider T ∈ Q^{3p×w×d} such that T_{i,1,:} = x′_i for i ∈ {1, ..., 3p}, and thus T_{i,:,:} = T_{i+p,:,:}. Notice that S_{1:p,:,:} = T_{1:p,:,:}; then, by the property above, for every t it holds that S^t_{1:p,:,:} = T^t_{1:p,:,:}. By periodicity, the output cells satisfy S^t_{2p,1,:} = S^t_{p,1,:} = T^t_{p,1,:} = T^t_{3p,1,:}, so the outputs of N for the inputs X and X′ are the same. Thus, if A accepts w then A accepts w′, which is a contradiction.
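The periodicity-preservation property behind Proposition 4.2 is easy to check numerically for the circular-convolution case. The minimal sketch below (our own setup: one row of vector-matrix kernels) shows that a p-periodic input of length 2p and one of length 3p produce outputs whose first p rows coincide, so a cell-wise recurrence can never tell them apart:

```python
import numpy as np

def circular_conv(K, S):
    """Convolution along the first axis with wrap-around indexing."""
    out = np.zeros_like(S)
    for i in range(S.shape[0]):
        for p in range(K.shape[0]):
            out[i] += S[(i + p) % S.shape[0]] @ K[p]
    return out

rng = np.random.default_rng(0)
d, p = 4, 3
K = rng.normal(size=(2, d, d))        # kernel of height 2, as in the proofs
u = rng.normal(size=(p, d))           # one period
S2, S3 = np.tile(u, (2, 1)), np.tile(u, (3, 1))   # lengths 2p and 3p

print(np.allclose(circular_conv(K, S2)[:p], circular_conv(K, S3)[:p]))  # True
```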
We show that the Transformer architecture and the Neural GPU are Turing complete.
It is usually hard for a learning system to predict correctly on rare events, and segmentation algorithms are no exception. Therefore, we hope to build an alarm system that sets off alarms when a segmentation result is possibly unsatisfactory. One plausible solution is to project the segmentation results into a low-dimensional feature space, and then learn classifiers/regressors in the feature space to predict the quality of the segmentation results. In this paper, we form the feature space using the shape feature, which is strong prior information shared among different data, so it is capable of predicting the quality of segmentation results given different segmentation algorithms on different datasets. The shape feature of a segmentation result is captured using the value of the loss function when the segmentation result is tested using a Variational Auto-Encoder (VAE). The VAE is trained using only the ground-truth masks; therefore, bad segmentation results with bad shapes become rare events for the VAE and will result in large loss values. By utilizing this fact, the VAE is able to detect all kinds of shapes that are out of the distribution of normal shapes in the ground truth (GT). Finally, we learn a representation in the one-dimensional feature space to predict the quality of segmentation results. We evaluate our alarm system on several recent segmentation algorithms for the medical segmentation task. The segmentation algorithms perform differently on different datasets, but our system consistently provides reliable predictions of the quality of the segmentation results.

A segmentation algorithm usually fails on rare events, and it is hard to fully avoid this issue. The rare events may occur due to the limited amount of training data. To handle this, the most intuitive way is to increase the amount of training data. However, labelled data is usually hard to collect; e.g., fully annotating a 3D medical CT scan requires professional radiology knowledge and several hours of work. In addition, human labelling is unable to cover all possible cases. Previously, various methods have been proposed to make better use of the training data, like sampling strategies paying more attention to rare events (BID17). But the algorithm may still fail on rare events that never occur in the training data. Another direction is to increase the robustness of the segmentation algorithm to rare events. BID6 proposed the Bayesian neural network, which can model the uncertainty as an additional loss to make the algorithm more robust to noisy data. These kinds of methods make the algorithm insensitive to certain types of perturbations, but the algorithm may still fail on other perturbations.

Since it is hard to completely prevent the segmentation algorithm from failure, we consider detecting the failure instead: we build an alarm system cooperating with the segmentation algorithm, which sets off alarms when the system finds that the segmentation result is not good enough. This task is also called quality assessment. Several works have been proposed in this field. BID5 applied a Bayesian neural network to capture the uncertainty of the segmentation result and set off alarms based on that uncertainty. However, this system also suffers from rare events, since segmentation algorithms often make mistakes confidently on some rare events. BID8 provided an effective way of projecting the segmentation result into a feature space and learning from this low-dimensional space.
They manually design several heuristic features, e.g., size and intensity, and assume such features would indicate the quality of the segmentation result.

Figure 1 (panels: CT image, GT, Recon-Prediction, Recon-GT, Prediction, Entropy, Alea-Uncertainty, Epis-Uncertainty): Visualization of results on an NIH CT dataset for pancreas segmentation. Recon-Prediction and Recon-GT are the reconstructions of the prediction and the GT by the VAE network, respectively. The Dice score between the GT and the prediction is 47.06, while the Dice score between the prediction and Recon-Prediction is 47.25. In our method, we use the latter Dice score to predict the former real Dice score, which is usually unknown at inference time in real applications. This case shows how these two Dice scores are related to each other. For uncertainty-based methods, on the other hand, the different kinds of uncertainty distribute mainly on the boundary of the predicted mask, which makes them vague signals for detecting failure cases.

After projecting the segmentation results into the feature space, they learned a classifier to predict the quality. Since the feature space is of relatively low dimension, it is possible to distinguish good segmentation results from bad ones directly. In a reasonable feature space, when the segmentation algorithm fails, the failure output will be far from the ground truth. So the main problems are what these good features are and how to capture them. Many of the features that BID8 selected are actually only weakly related to the quality of the segmentation result, e.g., size. In our system, we choose a more representative feature: the shape feature. The shape feature is important because the objects being segmented (the foreground in the volumetric mask) often have stable shapes among different cases, especially in 3D. So the shape feature is supposed to provide a strong prior for judging the quality of a segmentation result, i.e., bad segmentation results tend to have bad shapes and vice versa. Furthermore, modeling the prior in the segmentation-mask space is much easier than in the image space, and the shape prior can be shared among different datasets, while features like image intensity are affected by many factors. This means the shape feature can deal not only with rare events but also with different data distributions in the image space, which shows great generalization power and potential for transfer learning.

We propose to use the Variational Auto-Encoder (VAE) (BID7) to capture the shape feature. The VAE is trained on the ground-truth masks, and afterwards we define the value of the loss function as the shape feature of a segmentation result when it is tested with the VAE network. Intuitively speaking, after the VAE is trained, bad segmentation results with bad shapes are just rare events to the VAE, because it is trained using only the ground-truth masks, which follow the distribution of normal shapes; thus bad results will have larger loss values. In this sense we are utilizing the fact that learning algorithms perform badly on rare events. Formally speaking, the loss function, known as the variational lower bound, is optimized to approximate the function log P(Y) during the training process. So after training, the value of the loss function given a segmentation result Ŷ is close to log P(Ŷ), thus being a good definition for the shape feature.

In this paper, we propose a VAE-based alarm system for segmentation algorithms. The quality of the segmentation results can be well predicted using our system.
To validate the effectiveness of our alarm system, we test it on multiple segmentation algorithms. These segmentation algorithms are trained on one dataset and tested on several other datasets to simulate the occurrence of rare events. The performance of the segmentation algorithms on the other datasets (rather than the training dataset) drops quickly, but our system can still predict the quality accurately. We compare our system with other alarm systems on the above tasks, and our system outperforms them by a large margin, which shows the importance of the shape feature in the alarm system and the great power of the VAE in capturing the shape feature.

2 RELATED WORK

BID6 employed a Bayesian neural network (BNN) to model the aleatoric and epistemic uncertainty. Afterwards, BID9 applied the BNN to calculate the aleatoric and epistemic uncertainty on medical segmentation tasks. BID5 utilized the BNN and modeled another kind of uncertainty based on the entropy of the segmentation result. They calculated a doubt score by summing over weighted pixel-wise uncertainties. However, we can see from Figure 1 that when the segmentation algorithm fails to provide a correct prediction, the uncertainty still distributes mainly on the boundary of the wrong segmentation result, which means the algorithm is strongly confident about where it makes mistakes.

Other methods, like that of Valindria et al., use registration for quality assessment. Registration-based methods are reliable because they exploit image priors by setting up a reference dataset, but their problem is inefficient testing: every single case needs to be registered against all reference data to determine its quality, and registration of 3D images is usually slow. Also, registration-based methods can hardly be transferred between datasets or modalities. Chabrier et al. and BID3 use unsupervised methods to estimate the segmentation quality using geometrical and other features; however, their applicability in medical settings is not clear. BID14 tried a simple method, using the image-segmentation pair to directly regress the quality. Kohlberger et al. introduced a feature space of shape and appearance to characterize a segmentation. The shape features in their system contain the volume size and surface area, which are not necessarily related to the quality of the segmentation result.

In our work we choose to learn a statistical prior of the segmentation mask and then determine the quality by how well a mask fits the prior. This is related to out-of-distribution (OOD) detection. Previous works in this field (BID4; BID10) made use of the softmax output of the last layer of a classifier to calculate the out-of-distribution level. In our case, however, for a segmentation method these techniques only yield a voxel-wise out-of-distribution level, and how to calculate the out-of-distribution level for the whole mask becomes another problem. In addition, the segmentation algorithm can usually predict most voxels correctly and with high confidence, making the out-of-distribution level on those voxels less representative.

The auto-encoder (AE), as a way of learning representations of data automatically, has been widely used in many areas such as anomaly detection (BID21) and dimension reduction. Compared with the AE, the variational auto-encoder (VAE) (BID7) can better learn the representation of the latent space. We employ a VAE to learn the shape representation from the volumetric mask. Unlike the method of BID18, which needs to pre-train with an RBM, a VAE can be trained in an end-to-end fashion.
BID13 learned the shape representation from the point-cloud form, while we choose the volumetric form as a more natural way to cooperate with the segmentation task. Prior work has also utilized an AE to evaluate the difference between the prediction and the ground truth, but not in an unsupervised way.

We first define our task formally. Denote the dataset we have as (X, Y), where Y is the label set of X. We divide (X, Y) into a training set (X_t, Y_t) and a validation set (X_v, Y_v). Suppose we have a segmentation algorithm F trained on X_t. Usually we validate the performance of F on X_v using Y_v; now we want to do this task without Y_v. Formally, we try to find a function L̂ such that

L̂(F, X) ≈ L(F(X), Y),

where L is a function used to calculate the similarity of the segmentation result F(X) with respect to the ground truth Y, i.e., the quality of F(X). How to design L̂ so that it extracts valuable information from F and X is the main question.

Recall that failure may happen when X is a rare event. But detecting whether an image X lies within the distribution of the training data is very hard because of the complex structure of the image space; actually, that is what F is trained to learn. In uncertainty-based methods (BID5; BID9), the properties of F are encoded by sampling the parameters of F and calculating the uncertainty of the output. The uncertainty does help predict the quality, but the performance strongly relies on F: it requires F to have a Bayesian structure, which is not in our assumptions, and for a well-trained F the uncertainty will mainly distribute on the boundary of the segmentation prediction. So we change the formulation above to

L̂(F(X), X) ≈ L(F(X), Y).

By adding this constraint, we still take the information from F and X, but not in a direct way. The most intuitive idea is to directly apply a regression algorithm on the segmentation result to predict the quality. But the main problem is that the regression parameters trained with a certain segmentation algorithm F highly relate to the distribution of F(X), which varies from one F to another. Following the idea of BID8, we apply a two-step method: the first step is to encode the segmentation result F(X) into a feature space, and the second is to learn from the feature space to predict the quality of F(X). We propose a novel way of capturing the shape feature from F(X), denoted S(F(X); θ). Finally, the formulation becomes

L̂(S(F(X); θ)) ≈ L(F(X), Y).

The shape feature is captured by a Variational Autoencoder (VAE) trained with the ground-truth masks Y ∈ Y_t. Here we define the shape of the segmentation masks as the distribution of the masks in volumetric form. We assume the normal label Y obeys a certain distribution P(Y). For a predicted mask ŷ, its quality should be related to P(Y = ŷ). Our goal is to estimate the function P(Y).

Recall the theory of the VAE: we hope to find an estimation function Q(z) minimizing the difference between Q(z) and P(z|Y), where z is the variable of the latent space we want to encode Y into, i.e., optimizing

arg min_Q KL[Q(z) ∥ P(z|Y)],

where KL is the Kullback-Leibler divergence. By replacing Q(z) with Q(z|Y), this can be deduced to the core equation of the VAE (BID2):

log P(Y) − KL[Q(z|Y) ∥ P(z|Y)] = E_{z∼Q(z|Y)}[log P(Y|z)] − KL[Q(z|Y) ∥ P(z)],

where P(z) is the prior distribution we choose for z, usually a Gaussian, and Q(z|Y) and P(Y|z) correspond to the encoder and decoder, respectively. Once Y is given, log P(Y) is a constant. So by optimizing the RHS, known as the variational lower bound of log P(Y), we optimize KL[Q(z|Y) ∥ P(z|Y)]. Here, however, we are interested in P(Y).
By exchanging the second term on the LHS with all the terms on the RHS of the equation above, we can rewrite the training process as minimizing

E_{Y∼P(Y)} [ KL[Q(z|Y) ∥ P(z|Y)] ] = E_{Y∼P(Y)} [ log P(Y) − S(Y; θ) ],

where we denote E_{z∼Q(z|Y)}[log P(Y|z)] − KL[Q(z|Y) ∥ P(z)] as S(Y; θ) for brevity. This shows that the training process is actually learning a function to best fit log P(Y) over the distribution of Y. After training the VAE, S(Y; θ) becomes a natural approximation of log P(Y), so we simply choose S(Y; θ) as our shape feature. In this method we use the Dice loss (BID11) when training the VAE, which is widely used in medical segmentation tasks. The final form of S is

S(Y; θ) = E_{z∼N(μ(Y), Σ(Y))} [ Dice(Y, g(z)) ] − λ · KL[ N(μ(Y), Σ(Y)) ∥ N(0, I) ],

where the encoder μ, Σ and the decoder g are controlled by θ, and λ is a coefficient balancing the two terms. The first term is the Dice's coefficient between Y and g(z), ranging from 0 to 1 and equal to 1 if Y and g(z) are equal. Looking at the shape feature closely, it indicates that after the VAE is trained using data with only normal shapes, a predicted mask ŷ tends to be more likely within the distribution of normal shapes if it achieves a smaller reconstruction error and is closer to the prior distribution in the latent space, since log P(ŷ) ≥ S(ŷ; θ) holds all the time. On the other hand, cases with high P(ŷ) but low S(ŷ; θ) would introduce a large penalty into the objective function, and are less likely to occur for a well-trained VAE.

We assume that the shape feature is good enough to obtain a reliable quality assessment. Intuitively, for a segmentation result F(X), the higher log P(F(X)) is, the better the shape of F(X) is, and thus the higher L(F(X), Y) should be. Formally, taking the shape feature from Section 3.1, we predict the quality by fitting a function L̂ such that L̂(S(F(X); θ)) ≈ L(F(X), Y). Here the parameter θ is learned by training the VAE, using the labels in the training data Y_t, and is then fixed during step two. We choose L̂ to be a simple linear model, so the energy function we want to optimize is

E(a, b) = Σ_X ( a·S(F(X); θ) + b − L(F(X), Y) )².

We only use a linear regression model because the experiments show a strong linear correlation between the shape feature and the quality of the segmentation results. L is the Dice's coefficient, i.e.,

L(F(X), Y) = 2 |F(X) ∩ Y| / ( |F(X)| + |Y| ).

In step one, the VAE is trained using only the labels in the training data; then, in step two, θ is fixed. To learn a and b, the standard way is to optimize the energy function above using the segmentation results on the training data, i.e.,

arg min_{a,b} Σ_{X∈X_t} ( a·S(F(X); θ) + b − L(F(X), Y) )².

Here the segmentation algorithm F that we use to learn a and b is called the preparation algorithm. If F is trained on X_t, the quality of F(X) would always be high, thus providing little information for regressing a and b. To overcome this, we use a jackknifing training strategy for F on X_t. We first divide X_t into X_t^1 and X_t^2, and train two versions of F, on X_t \ X_t^1 and X_t \ X_t^2 respectively, say F_1 and F_2. The optimization objective then becomes

arg min_{a,b} Σ_{k∈{1,2}} Σ_{X∈X_t^k} ( a·S(F_k(X); θ) + b − L(F_k(X), Y) )².

In this way we solve the problem above by simulating the performance of F on the testing set. The most accurate way would be leave-one-out training for F, but the time consumption is not acceptable, and the two-fold split is effective enough according to our experiments.

Table 1: Comparison between our method and the baseline methods. The BNN is trained on NIH and tested on the three other datasets. The segmentation results are then evaluated by the 4 methods automatically, without using the ground truth. Of the 4 methods, ours achieves the highest accuracy and the highest correlation between the predicted Dice score and the real Dice score.
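A minimal sketch of the two-fold fitting step, assuming we have already collected the shape feature S(F_k(X); θ) and the real Dice for every training case under the held-out model F_k (all names and numbers below are hypothetical):

```python
import numpy as np

def fit_quality_regressor(shape_feats, dices):
    """Least-squares fit of Dice ~ a * S + b over the jackknifed predictions."""
    A = np.stack([shape_feats, np.ones_like(shape_feats)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, dices, rcond=None)
    return a, b

feats = np.array([0.90, 0.80, 0.55, 0.40])   # S(F_k(X); theta), pooled over folds
dices = np.array([0.92, 0.84, 0.60, 0.45])   # real Dice against the ground truth
a, b = fit_quality_regressor(feats, dices)
predict_quality = lambda s: a * s + b          # needs no ground truth at test time
```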
When the training is done, we can test on any segmentation algorithm F and data X to predict the quality Q = a S(F(X); θ) + b. In this section we test our alarm system on several recent algorithms for automatic pancreas segmentation that are trained on a public medical dataset. Our system obtains reliable predictions for the qualities of the segmentation results. Furthermore, the alarm system remains effective when the segmentation algorithms are tested on other datasets. We show better quality assessment capability and transferability compared with uncertainty-based methods and the direct regression method. The quality assessment is evaluated using the mean absolute error (MAE), the standard deviation of the residual error (STD), the Pearson correlation (P.C.), and the Spearman correlation (S.C.) between the real quality (Dice's coefficient) and the predicted quality. We adopt three public medical datasets and four recent segmentation algorithms in total. All datasets consist of 3D abdominal CT images in portal venous phase with the pancreas region fully annotated. The CT scans have resolutions of 512 × 512 × h voxels with varying voxel sizes. • NIH Pancreas-CT Dataset (NIH). The NIH Clinical Center performed 82 abdominal 3D CT scans BID15 from 53 male and 27 female subjects. The subjects were selected by a radiologist from patients without major abdominal pathologies or pancreatic cancer lesions. • Medical Segmentation Decathlon (MSD). • Multi-atlas Labeling Challenge (MLC). The multi-atlas labeling challenge provides 50 (30 training + 20 testing) abdominal CT scans randomly selected from a combination of an ongoing colorectal cancer chemotherapy trial and a retrospective ventral hernia study. Table 2: Different algorithms tested on different datasets are evaluated by our alarm system. Without tuning parameters, the system can be directly applied to evaluate other segmentation algorithms. The testing data for the last two datasets is not used in our experiment since we have no annotations for these cases. The segmentation algorithms we choose are V-Net BID11, 3D Coarse2Fine BID20, Deeplabv3, and 3D Coarse2Fine with Bayesian structure BID9. The first two algorithms are based on 3D networks while Deeplab is 2D-based. The 3D Coarse2Fine with Bayesian structure is employed to compare with the uncertainty-based methods, and we denote it as Bayesian neural network (BNN) afterwards. We compare our method with three baseline methods. Two of them are based on uncertainty, and the last one directly applies a regression network on the prediction mask to regress the quality. • Entropy Uncertainty. BID5 calculated the pixel-wise predictive entropy using Bayesian inference. Then, the uncertainty is summed up over the whole image to get a doubt score, and the doubt score replaces the shape feature to regress the quality. The sum is weighted by the distance to the predicted boundary, which partially alleviates the biased distribution of the uncertainty. Their method was developed for 2D images, and we transfer it to 3D images without essential difficulty. • Aleatoric and Epistemic Uncertainty. BID9 divided the uncertainty into two terms called aleatoric uncertainty and epistemic uncertainty. We implement both terms and calculate the doubt scores in the same way as BID5, because the original paper does not specify one. The two doubt scores are used in predicting the quality. • Direct Regression. A regression neural network is employed to directly learn the quality of the predictive mask.
The training data for this network is the prediction of the segmentation algorithm F on X_t, and the real Dice's coefficient between the predictive mask and the label mask is used as supervision. For data pre-processing, since the voxel size varies from case to case, which would affect the shape of the pancreas and the prediction of the segmentation, we first re-sample the voxel size of all CT scans and annotation masks to 1mm × 1mm × 1mm. For training the VAE, we apply a simple alignment on the annotation mask. We employ a cubic bounding box which is large enough to contain the whole pancreas region, centered at the pancreas centroid, then crop both the volume and label mask out and resize them to a fixed size of 128 × 128 × 128. We only employ a simple alignment because the human pose is usually fixed when taking a CT scan (e.g., a standard stance), so the organ will not rotate or deform heavily. For a segmentation prediction, we also crop and resize the predicted foreground to 128 × 128 × 128 and feed it into the VAE to capture the shape feature. During the training process, we employ rotations of −10, 0, and 10 degrees along the x, y, and z axes, and random translations of fewer than 5 voxels, on the annotation masks as data augmentation. This kind of mild disturbance can enrich the data distribution while keeping the alignment property of our annotation masks. We tried different dimensions of the latent space and finally set it to 128. We found that VAEs with latent spaces of different dimensions have different capabilities in quality assessment. The hyperparameter λ in the objective function of the VAE is set to 2^{-5} to balance the small value of the Dice loss and the large KL divergence. We trained our network with the SGD optimizer with batch size 4. The learning rate for training the VAE is fixed to 0.1. We build our framework and the other baseline models using TensorFlow. All the experiments are run on an NVIDIA Tesla V100 GPU. The first training step is done in 20000 iterations in total and takes about 5 hours. We split the NIH data into four folds; three of them are used for training the segmentation algorithms and our pipeline, and the remaining one, together with all training data from MSD and MLC, forms the validation data to evaluate our evaluation method. First we learn the parameters of the VAE using the training labels of the NIH dataset. Then we choose the BNN as the preparation algorithm. The training strategy in Section 3.3 is applied to it to learn the parameters of the regression. For all the baseline methods, we employ the same jackknifing training strategy as in our method and choose the BNN as the preparation algorithm for a fair comparison. Finally we predict the quality of the predicted masks on the validation data for all the segmentation algorithms. Note that all segmentation algorithms are trained only on the NIH training set. Table 1 reports the results of using the three baseline models and our method to evaluate the BNN model tested on the three datasets. In general, our method achieves the lowest error and variance on all datasets. In our experiment, the BNN achieves 82.15, 57.10, and 66.36 average Dice scores tested on the NIH, MSD, and MLC datasets respectively. The segmentation algorithm trained on NIH will fail on some cases of the other datasets, and that is why we need the alarm system. The Spearman coefficient for the direct regression method on the NIH dataset is close to 0 because the testing results on NIH are all of high quality, and the regression is not sensitive to slight variations in quality.
Uncertainty-based methods can better predict the quality, but as shown in Figure 1, the uncertainty mainly distributes on the boundary of the predicted mask and not on the missing parts or false positive parts. When the BNN is tested on the other two datasets, our method remains stable in predicting the quality. Table 2 shows the quality assessment results for 4 different segmentation algorithms. When evaluating the segmentation results of the DeepLab algorithm tested on the MLC dataset, the accuracy is lower, but the correlation between the predicted quality and the real quality is high. In this paper we present a VAE-based alarm system for segmentation algorithms which predicts the quality of a segmentation result without using ground truth. We claim that the shape feature is useful in predicting the quality of segmentation results. To capture the shape feature, we train a VAE using only ground-truth masks. We utilize the fact that rare events achieve a larger value of the loss function, and successfully detect out-of-distribution shapes according to the value of the loss function at testing time. In the second step we collect the segmentation results of the segmentation algorithm on the training data and extract their shape features to learn the parameters of the regression. By applying jackknifing training to the segmentation algorithm, we obtain segmentation results of different qualities on the training data, and therefore more accurate regression parameters. The reliable quality assessment results prove both that the shape feature captured by the VAE is meaningful and that the shape feature is useful for quality assessment in the segmentation task. Furthermore, our proposed method outperforms the uncertainty-based methods and the direct regression method, and possesses better transferability to other datasets and other segmentation algorithms.
We use a VAE to capture the shape feature for automatic segmentation evaluation
974
scitldr
The linear transformations in converged deep networks show fast eigenvalue decay. The distribution of eigenvalues looks like a heavy-tail distribution, where the vast majority of eigenvalues is small, but not actually zero, and only a few spikes of large eigenvalues exist. We use a stochastic approximator to generate histograms of eigenvalues. This allows us to investigate layers with hundreds of thousands of dimensions. We show how the distributions change over the course of ImageNet training, converging to a similar heavy-tail spectrum across all intermediate layers. The study of generalization in deep networks has shifted its focus from the skeleton structure of neural networks BID1 to the properties of the linear operators of the network layers BID3 BID4 BID12. Measures like matrix norms (including the standard Frobenius norm, other p-norms, or the spectral norm) or the stable rank BID2 are important components of theoretical bounds on generalization. All of these measures rely on the singular values of the linear maps A, or equivalently on the eigenvalues of the operator times its transpose, AA^T. In order to visually inspect the eigenvalue spectrum of a matrix, it is useful to compute a histogram. Histograms allow us to roughly estimate the distribution of eigenvalues and detect properties like decay behavior or the largest eigenvalues. An example of such a histogram is FIG0, which shows the eigenvalues of a convolution layer in a squeeze_net network that maps to a feature map of dimension 256 × 13 × 13 = 43,264. It shows many interesting, but not well-understood, characteristic properties of fully trained networks: a heavy-tail eigenvalue distribution with the vast majority of eigenvalues being near zero, though none are actually zero, and only few spikes of large eigenvalues. Martin and Mahoney show this phenomenon in the linear layers of large pre-trained models BID10; we also show it in the convolution layers and follow its evolution over the course of optimization. Computing singular values exactly is a costly operation, particularly because the dimension in state-of-the-art convolutional neural networks often exceeds 100,000. Fortunately, when we are interested only in a histogram, we do not need to know the exact eigenvalues, but only the number of eigenvalues that fall into the bins of the histogram. Based on the decay property exhibited in deep networks, we propose an approach for estimating histograms of eigenvalues in deep networks based on two techniques: For estimating the few high eigenvalues, called spikes, and particularly the largest eigenvalue, we use ARPACK, a truncated eigenvalue decomposition method that does not require the matrix explicitly, but accesses it only via matrix-vector products. For estimating the remainder, called the bulk, we use a method based on matrix Chebyshev approximations and Hutchinson's trace estimator BID11. Like ARPACK, it only accesses the matrix via matrix-vector products. In FIG0, we have colored the bins we computed exactly in red; the approximated ones are blue. We denote the number of eigenvalues of a symmetric linear operator A ∈ R^{m×m} that fall into an interval [l, u) by μ_{l,u}(A) = |{i : l ≤ λ_i(A) < u}|. The highest-dimensional linear operators in deep networks used in production environments are probably convolution layers. These layers transform feature maps, with the number of raw features in the input and output feature maps often exceeding 100k.
This is feasible because the convolution operator is not implemented as a matrix-vector multiplication with dense weight matrices; instead, specialized and highly-optimized convolution routines are used, and virtually every reputable deep learning software framework provides these routines. We can use these same routines when we estimate the eigenvalues of the linear maps of network layers. We first make sure that the network layer does not add a bias term BID0. Now let H be the linear map of a neural network layer. The forward pass of that layer computes Hx efficiently, whereas the backward pass computes H^T y with backward-flowing gradient information y. We are interested in the eigenvalues of HH^T; hence, to compute HH^T y, we first pass y through the backward pass of H, and pass the resulting gradient through the forward pass to obtain the resulting vector. To estimate the spikes and particularly the largest eigenvalue, we use the implicitly restarted Lanczos method as implemented in the ARPACK software package. It computes a truncated eigenvalue decomposition for implicit matrices that are accessed via matrix-vector products BID9. We specify a number of spikes T > 0 and compute the first T eigenvalues with ARPACK. From the largest eigenvalue, we derive the equidistant histogram binning over the range [0, λ_1]. We use a technique for stochastically estimating eigenvalue counts proposed by Napoli et al. BID11. It requires that all eigenvalues fall into the range [−1, 1]. Hence, we first transform the matrix via A → 2λ_1^{-1} A − I, since we already know λ_1 from the ARPACK-based spike estimator. We define the indicator function δ_{l,u}(λ) that is 1 iff l ≤ λ < u, and notice that we can write the number of eigenvalues in [l, u) as μ_{l,u}(A) = Σ_{i=1}^m δ_{l,u}(λ_i) = tr δ_{l,u}(A). We can approximate δ_{l,u} with Chebyshev polynomials of a fixed degree K, δ_{l,u}(λ) ≈ Σ_{k=0}^K b_k Φ_k(λ), where Φ_k is the kth Chebyshev basis polynomial and b_k ∈ R its corresponding coefficient. Closed-form coefficients are known for the indicator function BID11: with θ_l = arccos(l) and θ_u = arccos(u), we have b_0 = (θ_l − θ_u)/π and b_k = (2/π)(sin(k θ_l) − sin(k θ_u))/k for k ≥ 1. Now we can rewrite the count as the trace of a polynomial matrix function applied to our matrix of interest, as it holds that tr f(A) = Σ_i f(λ_i); thus μ_{l,u}(A) ≈ tr Σ_{k=0}^K b_k Φ_k(A), where Φ_k(A) is the kth Chebyshev basis for matrix functions. This quantity in turn can be approximated using stochastic trace estimators, aka Hutchinson estimators BID7. It holds that tr A = E_x[x^T A x], where each component of x is drawn independently from a zero-mean distribution like the standard normal or Rademacher distribution. This expression lends itself to a simple sampling algorithm, where we draw S independent x_1, ..., x_S and estimate tr A ≈ (1/S) Σ_{s=1}^S x_s^T A x_s. We do not have to explicitly compute Φ_k(A), as only the product Φ_k(A) x_s is required. Since Chebyshev polynomials by construction follow the recursion Φ_{k+1}(A) = 2AΦ_k(A) − Φ_{k−1}(A), we derive Algorithm 1 to estimate the count. Our experiments are based on a PyTorch implementation of the proposed histogram estimator. We train a squeeze_net architecture BID8 on ImageNet data. After 30 epochs of training, we reduce the learning rate to 10%, and we repeat this after another 30 epochs. We train using plain stochastic gradient descent with mini-batches of size 128 and compute histograms for all convolution layers before the first and after every epoch. For the histogram computation, we use a budget of 1000 for the exact computation of eigenvalues and approximate the remainder using the stochastic estimator. We present some histograms in FIG2 for the first and last convolution layers BID2.
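As a concrete illustration of Algorithm 1, the following is a minimal PyTorch sketch of the bulk estimator, assuming the spectrum has already been rescaled into [−1, 1]. The function and parameter names are ours; in practice, matvec would be built from the layer's backward and forward passes as described above.

```python
import math
import torch

def chebyshev_indicator_coeffs(l, u, K):
    # Closed-form Chebyshev coefficients of the indicator of [l, u) on [-1, 1]
    tl, tu = math.acos(l), math.acos(u)
    b = [(tl - tu) / math.pi]
    for k in range(1, K + 1):
        b.append(2.0 / math.pi * (math.sin(k * tl) - math.sin(k * tu)) / k)
    return b

def estimate_eig_count(matvec, dim, l, u, K=100, S=32):
    """Estimate the number of eigenvalues of a symmetric operator in [l, u).

    matvec: callable computing A @ x for the (rescaled) operator, whose
    spectrum is assumed to lie in [-1, 1].
    """
    b = chebyshev_indicator_coeffs(l, u, K)
    total = 0.0
    for _ in range(S):
        x = torch.randint(0, 2, (dim,)).float() * 2 - 1   # Rademacher probe
        phi_prev, phi = x, matvec(x)                      # Phi_0 x, Phi_1 x
        acc = b[0] * x + b[1] * phi                       # partial sum of f(A) x
        for k in range(2, K + 1):
            phi_prev, phi = phi, 2 * matvec(phi) - phi_prev  # Chebyshev recursion
            acc = acc + b[k] * phi
        total += torch.dot(x, acc).item()                 # x^T f(A) x
    return total / S
```

Evaluating estimate_eig_count once per histogram bin then yields the blue (approximated) portion of a histogram like FIG0.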
The histograms of the other layers show a similar behavior to the last layer; for instance, FIG0 shows an intermediate layer after the first epoch of training quite similar to FIG2, but with less extreme decay. BID1: A real-valued function f(x): R → R has a corresponding matrix function f(A): R^{m×m} → R^{m×m}, and the eigenvalues of f(A) are f(λ_1), ..., f(λ_m). For polynomials, we get this matrix function by replacing scalar multiplications with matrix multiplications and scalar additions with matrix additions. For other classes of functions and a comprehensive introduction to matrix functions, see Higham's book BID6. BID2: Additional and animated histograms are available at https://whadup.github.io/Resultate/; however, note that the website is not sufficiently anonymized for double-blind reviewing. Proceed with caution. Like Martin and Mahoney BID10, we identify different phases in the spectrograms. Right after initialization, the matrix behaves almost as random matrix theory suggests, given the element-wise independent random initialization with Gaussian random variables BID13. This can be observed in FIG2. However, we note that on the first layer (Fig. 3a) there are some unexpected bumps in the histogram. We conjecture that this may be due to padding in the convolutions. As optimization commences, we start to see heavy-tail behavior. Already after one epoch of training, the largest eigenvalues have separated from the bulk, while the majority of eigenvalues remains in the same order of magnitude as before training. This can be seen for the first and last convolution layers in FIG2. The bumps in the first layer smooth out a little. Over the course of the first 30 epochs, the largest eigenvalues grow steadily as the tail of the spectrum grows further. Then, as soon as the learning rate is reduced to 10%, the operator norms of the linear maps start to decrease, as depicted in FIG1. Considering the importance of the operator norm in known generalization bounds for feed-forward networks, this suggests that some sort of regularization is happening. The bumps in the first layer smooth out further, but remain visible. The last layer for the most part keeps its shape in the last 60 epochs; besides the reduction of the norm, we notice that the largest bar decreases in size from 4157 to 2853 and that the difference seems to move to the other bars in the blue portion of the histogram. Understanding the structure in the linear transformations might be an important aspect of understanding generalization in deep networks. To this end we have presented a stochastic approach that allows us to estimate the eigenvalue spectrum of these transformations. We show how the spectrum evolves during ImageNet training using convolutional networks, more specifically squeeze_net networks. In the future we want to apply similar approaches to estimating the covariance structure of the intermediate feature representations and investigate the relations between covariance matrices and parameter matrices. Since the estimator we use is differentiable BID5 BID0, it may be interesting to investigate its usefulness for regularization.
We investigate the eigenvalues of the linear layers in deep networks and show that the distributions develop heavy-tail behavior during training.
975
scitldr
Capturing long-range feature relations has been a central issue in convolutional neural networks (CNNs). To tackle this, attempts to integrate end-to-end trainable attention modules into CNNs are widespread. The main goal of these works is to adjust feature maps considering spatial-channel correlations inside a convolution layer. In this paper, we focus on modeling relationships among layers and propose a novel structure, the 'Recurrent Layer Attention network,' which stores the hierarchy of features in a recurrent neural network (RNN) that propagates concurrently with the CNN and adaptively scales the feature volumes of all layers. We further introduce several structural derivatives to demonstrate the compatibility with recent attention modules and the expandability of the proposed network. For a semantic understanding of the learned features, we also visualize intermediate layers and plot the curve of the layer scaling coefficients (i.e., layer attention). The Recurrent Layer Attention network achieves significant performance enhancement, requiring only a slight increase in parameters, on an image classification task with the CIFAR and ImageNet-1K 2012 datasets and an object detection task with the Microsoft COCO 2014 dataset. Concatenating all features in the order of layers in a convolutional neural network (CNN) provides a new interpretation: features form a sequence, ranging from features with small receptive fields to features with large receptive fields. Interestingly, the recurrent neural network (RNN) is one of the representative models for such sequential information. On the other hand, recent attempts to utilize attention mechanisms to endow CNNs with better representational power are prevalent (b;). Motivated by the intrinsic characteristics of CNNs and RNNs, and by recent attention works in computer vision, we present the Recurrent Layer Attention Network (RLA network), which is differentiable and light-weight while improving the representational power of CNNs in a slightly different way from other attention works in computer vision. The main goal of our work is to apply a global weight balance among layers by inheriting the feature hierarchy from previous CNN layers. We accomplish this goal with two main structural designs: employing our inter-layer attention mechanism to make the network re-adjust features, and utilizing an RNN that propagates in parallel with the CNN to memorize the feature hierarchy. We hypothesize that the RLA network gains additional class discriminability through inheriting the informative feature hierarchy, such as repetitive appearances of important features or relevant features with different receptive fields. For example, our network raises the activation of neurons responsible for the whole body of a zebra using a history of features with relatively smaller receptive fields. We demonstrate our hypothesis through the Grad-CAM visualization of intermediate features and the corresponding layer attention values (i.e., the importance of each layer). We evaluate the RLA network on image classification and object detection tasks using benchmark datasets: CIFAR, ImageNet-1K, and Microsoft COCO. On both tasks, the RLA network achieves results comparable with the state-of-the-art Squeeze-and-Excitation network (b), and superior to the original ResNet architecture. Moreover, we suggest the compatibility of the RLA network by introducing an expansion of the RLA network combining our inter-layer attention mechanism with recent attention works (b) in the ablation study.
Incorporating the RLA network into recent attention works (we call these networks intra-layer attention networks), further variations of the RLA network can be recognized as generalized attention networks considering correlations both inside and outside a layer. We summarize our contributions as follows: • We propose two new concepts: the weight balancing of CNN features along layers (we call it inter-layer attention), and the connection from shallow CNN layers to deep CNN layers through a concurrently propagating RNN. • We demonstrate the effectiveness of the proposed RLA network on an image classification task and an object detection task on benchmark datasets. The RLA network achieves similar or superior results compared with leading deep networks while requiring only a small increase in model parameters. • Ablation studies investigate the effectiveness of the two proposed concepts, and show the compatibility with existing intra-attention models and the further expandability obtained by tuning architectural designs. We show and discuss how the RLA network learns to interpret images by visualizing intermediate layers and plotting layer attention values. • The RLA network is easy to implement, requiring only basic operations of modern deep learning frameworks. Attention mechanism in computer vision. An attention mechanism can be interpreted as a methodology to bias the allocation of available neurons to the most informative components of the input signal (b). The recent main application of attention mechanisms in computer vision is integrating end-to-end trainable attention modules into deep CNNs. These attention modules are divided into two categories: spatial attention modules and channel-wise attention modules. Spatial attention modules learn spatial masks on feature maps for regulating the activations of neurons (; a). Channel-wise attention modules learn a channel-wise distribution and then utilize it to refine feature maps (b; a). Several architectural designs benefitting both spatial and channel-wise attention have also been stressed (; b; a; b). However, all these spatial and channel-wise attention modules, which we call intra-layer attention, only handle interactions inside a CNN layer. Therefore, currently suggested modules structurally lack the ability to model relations among visual features captured in different CNN layers. Therefore, we suggest an inter-layer attention mechanism. Through global weight balancing along CNN layers, our module models complex relationships among visual features of different receptive fields. RNN in computer vision. A number of computer vision papers utilize RNNs for various tasks, including visual question answering (a;), image captioning, multi-object classification (a;), and video description. They share a common way of exploiting RNNs: extracting visual features from images using pre-trained CNNs, then employing an RNN to model the sequential procedure of each task. For example, CNN features obtained from subsequent time frames become inputs to the corresponding RNN units, and the RNN conducts a prediction for video recognition and description tasks. However, there are only a few works that exploit RNNs to directly enhance the performance of deep neural networks on vision tasks that do not require sequential decisions, such as image classification or single-object detection. Figure 2: The structure of the Recurrent Layer Attention network (* = 1), RLA-vector (* = C_l), and IA+RLA (including a dotted line). ⊗ denotes scale operations belonging to intra-layer attention.
One trial proposes a deep network that employs an RNN and a region-attending mechanism, but it is non-differentiable. In contrast to earlier works, we let the CNN and RNN simultaneously affect each other at every layer of the CNN, thereby enhancing the representational power of the CNN. To the best of our knowledge, our concept of conveying information from shallow CNN layers to deep CNN layers through the use of concurrently propagating memory units is the first such attempt. We start by defining several expressions to reduce unnecessary ambiguity. In our work, local patterns denote class-specific characteristics in images, such as the head of a cat, which are collected as a response of the convolution operation. G denotes a convolution operation alone, or an arbitrary combination of convolution operations and other mapping functions, such as a residual block or an inception module. Features are the results of the convolution operation G. The RLA network consists of three operations: summarizing features, storing the summarized features in an LSTM, and inferring the layer attention. Right after the convolution operation G, the RLA module "summarizes" the feature volume into a statistic, which is a bag of the local patterns in the image. Then it "stores" the context in the LSTM hidden units, and finally "infers" the layer attention, i.e., how much to concentrate on the feature volume from G. The concept figure describing this procedure is Figure 1. The intuition for utilizing an RNN derives from recognizing the concatenated feature volumes along the CNN layers as a sequence. Because each element of the feature sequence is prior information implying which local patterns emerged, we hypothesize that the LSTM hidden units, which carry the hierarchy of features, can provide meaningful information to the CNN, such as the importance of the latter layers. Therefore, we utilize the attention mechanism to empower the CNN to reduce the activation of overlapped local patterns, or to emphasize the activation of important local patterns which highly contribute to class discriminability. The following subsections are organized as follows. The formulation of the RLA network is described in Subsection 3.1, several architectural derivatives showing the expandability and the compatibility with intra-attention modules are introduced in Subsection 3.2, and the light-weightness of the RLA network is described in Subsection 3.3. We present our RLA network by describing one forward-passing procedure of the l-th layer. The feature volume A_l is yielded by passing through G_l. Summarizing Feature. The feature volume A_l is summarized as the context. Here, the context S_l ∈ R^N signifies a simplified representation of the local patterns in the image, and is computed at every layer of the CNN through S_l = F_context(A_l), where F_context is a combination of a feature-volume summarizing operation and a consequent down-sampling (or up-sampling) operation. We investigate various types of feature summarizing operations to find proper statistics that embed the feature volume in Section 3.2. In order to feed S_l into the LSTM cell, down-sampling and up-sampling techniques must be adaptively applied per layer, because the RNN requires a fixed input size at each cell. Note that the same feature summarizing operation is used on every layer. Storing Context. The context captured in each earlier layer of the CNN is recurrently inserted into the LSTM hidden units h_l ∈ R^M, and selectively embedded in h_l through the updating functions of the LSTM with h_{l−1}: h_l = F_LSTM(S_l, h_{l−1}). Here, F_LSTM is the standard LSTM cell updating operation.
The LSTM hidden state and cell state of the first stage, h_1, are initialized with zero. Inferring Layer Attention. Considering the sequence of contexts previously fed into the LSTM through the cascade of convolution operations and the memory mechanism of the LSTM up to the l-th layer, h_l infers the layer attention α_l ∈ [0, 1], the scalar value for scaling the feature volume A_l: α_l = F_att(h_l) = σ(W_2 δ(W_1 h_l)), where F_att is composed of two fully-connected layers with reduction ratio r, and δ and σ refer to the ReLU and sigmoid operations, respectively. The reduction ratio r denotes the compression ratio of the first fully-connected layer. Finally, the scaled feature volume Ã_l is computed by the element-wise multiplication F_scale between α_l and all elements of A_l: Ã_l = α_l · A_l. We introduce several derivatives of the RLA network based on two key design variables: the attention structure and the context. All instances are evaluated and discussed in Subsection 4.1. Adoption of intra-layer attention mechanism. We show the compatibility of the RLA network with intra-layer attention mechanisms. Adopting an intra-layer attention module, the scale operation for the c-th channel of the feature volume in the l-th layer is given by Ã_l^c = α_l · F_scale(A_l^c), where A_l^c denotes the c-th feature map of A_l, and F_scale denotes an arbitrary intra-layer attention mechanism which calibrates a feature volume inside the layer. Figure 2 depicts the integration of RLA with an intra-layer attention module. We call this structure the Intra-layer Attention + RLA network (IA+RLA). To evaluate the performance of IA+RLA, we select the Squeeze-and-Excitation block (b) as the IA module. Distilling advantages from the feature hierarchy. The RLA network inherits two advantages: utilizing the feature hierarchy to affect subsequent features using an RNN, and adaptively scaling features layer by layer. Because both factors latently contribute to guiding the CNN towards better representational power, it is hard to discriminate their effects. To evaluate the separate effectiveness of the two factors, we additionally design a structure named RLA-vector. Of the two concepts mentioned above, RLA-vector adopts only the concept of exploiting the feature hierarchy with a concurrently propagating RNN, while not applying the inter-layer attention mechanism. Utilizing the intra-layer attention mechanism of the Squeeze-and-Excitation module (SE module) (b), RLA-vector adjusts feature volumes channel-wise. In other words, the LSTM hidden units of RLA-vector predict a channel-sized vector, not a layer attention scalar. Through the comparison between the Squeeze-and-Excitation block, which does not inherit a feature hierarchy, and RLA-vector, we can examine the pure impact of using a co-propagating RNN. Context: simplified representation of the feature volume. We focus on finding a context, a summarized feature volume, that preserves the information of local patterns during the feature summarization procedure. Here, we concentrate on employing statistics that only summarize the spatial (H × W) dimensions of the feature volume, while not exploiting a channel-wise summary of the feature volume. This intuition comes from the fact that the existence of local patterns is less related to their locations. First, we adopt the global average pooled feature and the global max pooled feature introduced in (b;) as contexts. Correspondingly, they summarize the spatial dimensions of the feature volume using average pooling and max pooling. We call these statistics global average pooled features (GAP) and global max pooled features (GMP).
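To make the three operations concrete, below is a minimal PyTorch sketch of one RLA step using a GAP context. The module structure, dimensions, and names are our own illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class RLAStep(nn.Module):
    """One inter-layer attention step: summarize -> store -> infer -> scale."""

    def __init__(self, context_dim=32, hidden_dim=32, reduction=4):
        super().__init__()
        self.cell = nn.LSTMCell(context_dim, hidden_dim)   # F_LSTM
        self.att = nn.Sequential(                          # F_att: two FC layers
            nn.Linear(hidden_dim, hidden_dim // reduction),
            nn.ReLU(),
            nn.Linear(hidden_dim // reduction, 1),
            nn.Sigmoid(),
        )
        self.context_dim = context_dim

    def forward(self, feat, state):
        # feat: (B, C, H, W) feature volume A_l; state: LSTM (h, c) tuple
        s = feat.mean(dim=(2, 3))                          # GAP context over H x W
        s = nn.functional.adaptive_avg_pool1d(             # resample channels to N
            s.unsqueeze(1), self.context_dim).squeeze(1)
        h, c = self.cell(s, state)                         # store context in LSTM
        alpha = self.att(h)                                # layer attention scalar
        return feat * alpha.view(-1, 1, 1, 1), (h, c)      # scaled volume A~_l
```

A backbone would call such a step once per residual block, threading the LSTM state (h, c) from block to block so that the layer attention at depth l depends on the whole history of contexts.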
The meaning of exploiting GAP in our work is to consider the size or multiple appearances of local patterns in the spatial dimensions of the feature volume. On the other hand, GMP is used to catch the most salient part of local patterns. Second, we additionally apply non-linear operations to GAP and GMP by attaching two fully connected layers, and call them GAP+MLP and GMP+MLP. These statistics are for modeling multiple appearances of similar local patterns in different channels. Seemingly, these statistics have the same structure as the SE module (b). However, the goal of exploiting the non-linearity is different: we use the non-linearity to produce a more informative context, while the SE network uses it to predict channel-wise scaling coefficients of the feature volume. The additional model parameters required by the RLA module consist of the LSTM parameters and those of the two fully-connected attention layers, where M is the number of neurons in the LSTM cell, N is the dimension of the context, L is the number of layers in the CNN, and r denotes the reduction ratio. As this count implies, the additional model complexity caused by exploiting the RLA module does not depend on the depth of the backbone CNN. This is by virtue of the weight-sharing property of the RNN. Whereas ResNet-56 with the SE module causes a 1.51% parameter increase, ResNet-56 with our module (RLA) only increases the parameters by 0.14% compared to the original ResNet-56. In this section, we conduct ablation experiments to analyze the proposed instances and introduce experimental results on the CIFAR and ImageNet-1K datasets for an image classification task, and the Microsoft COCO dataset for an object detection task. We adopt ResNet as the backbone CNN architecture for all experiments. To ensure a fair comparison with our work, we re-implement the results of ResNet and the Squeeze-and-Excitation network. For readability, implementation details for the experiments are described in Appendix A, and the parametric study examining the effects of the hyper-parameters (i.e., M, N, r) is reported in Appendix B. The main goal of the ablation study is to demonstrate the expandability of the RLA network. The key concepts illustrated in this part are as follows: the compatibility of RLA with the intra-layer attention mechanism, the impact of inheriting the feature hierarchy using an RNN on empowering the CNN with better representational power, and the effectiveness of the introduced statistics for the context. We adopt ResNet-56 as the backbone structure for the ablation study. Adopting Channel Attention. Table 1 shows that every instance outperforms ResNet-56 on the CIFAR-10 and CIFAR-100 datasets. Interestingly, we find that IA+RLA, the integrated structure of intra-layer attention and our approach, works worse than SE and RLA alone, but still better than ResNet. Note that this tendency might differ depending on the backbone structure and dataset. The reason is that the two designs occupy complementary positions: the intra-layer attention mechanism adjusts features with similar sizes of receptive fields, while the inter-layer attention mechanism focuses on calibrating features with different sizes of receptive fields. We leave further experiments and analysis on this observation as future work. Distillation of Feature Hierarchy. Supporting a key design concept of the RLA network, the exploitation of a concurrently propagating RNN, we observe that RLA-vector performs remarkably well compared with the SE network. These experimental results suggest that utilizing the feature hierarchy through an RNN alone also remarkably aids the CNN in interpreting images better.
This is interesting since it suggests the potential that the proposed concept of inheriting the feature hierarchy through an RNN can be applied by itself to other neural network architectural designs. Utilizing Various Contexts. The various feature summary statistics for the context affect performance differently. Similar to the results reported in (b;), GMP reports a higher error than GAP. Interestingly, the results for GAP+MLP and GMP+MLP differ depending on the dataset. Through this observation, we recognize that considering multiple appearances of local patterns in different channels of the feature volume has a negligible impact. We confirm that GAP is the most appropriate context considering the performance increase and the model complexity. The CIFAR dataset consists of 50K training images and 10K validation images. We train networks for 160 epochs, with the learning rate initialized to 0.1 and divided by 10 at epochs 80 and 120. The training and validation curves on the CIFAR dataset are depicted in Figure 3. Through the optimization schedule, we observe that the RLA network achieves lower training/validation error. The ImageNet-1K 2012 dataset comprises around 1.28M training images and 50K validation images. We train networks for 100 epochs on the ImageNet-1K 2012 dataset, with the learning rate set to 0.1 and divided by 10 every 30 epochs. Data augmentation and optimization details follow the methods that ResNet adopts. More implementation details are discussed in Appendix A. Table 2 (top-right) describes the experimental results. We notice that the RLA network outperforms ResNet, but ranks below the SE network in performance. We interpret this as coming from a structural limitation of the RLA module when applied to backbone CNNs with many channels. Taking ResNet-50 as an example backbone architecture, RLA scales the network using only 16 scale coefficients (the total number of residual blocks), while the SE module adjusts the network using 15104 scale coefficients (the total number of adjusted channels over all residual blocks). As mentioned in the ablation study, we expect the integration of the intra-layer mechanism and the RLA network to produce better results. The Microsoft COCO 2014 dataset contains 83K training images and 41K validation images. We adopt Faster R-CNN (b) as the detection method and replace the baseline network from ResNet to the RLA network. We train the networks for 490K iterations with ImageNet-1K 2012 pre-trained ResNet and RLA networks. More implementation details are discussed in Appendix A. Table 2 (bottom-right) summarizes the experimental results. Here, we observe improvements over the original ResNet baseline, verifying the generalization performance on the object detection task. Grad-CAM visualization. We hypothesized that the RLA network gains additional class discriminability by considering repetitive appearances of unimportant features or relevant features with different receptive fields. To demonstrate the above hypothesis and semantically understand how the RLA network learns, we applied Grad-CAM visualization to intermediate CNN layers. This visualization technique is of interest to our work since it provides a measure of the importance of each pixel in a feature map towards the overall decision of the CNN. Grad-CAM visualization is generally applied to the last CNN layer, because the last CNN layer is normally recognized as the most semantically salient layer.
Instead, we apply the Grad-CAM technique to the layers located directly before the scale operation with layer attention. We select three target classes having local patterns of different visual characteristics: racer, ostrich, and freight car. Here, different visual characteristics signify different receptive fields of the seemingly salient local patterns for discriminating the target class. Sorting in ascending order, we recognize that the racer class has local patterns with the smallest receptive fields, such as a wing and wheels; the ostrich class has a local pattern with a medium receptive field, the head; and third, the freight car class seems to have no specific local pattern or parts by itself. As results, we observe the following interesting facts. First, the RLA network learns to enhance the layer attention value on the layers that catch the most seemingly salient features for each target class. The layers marked with a red border denote the layers that achieve the highest layer attention value. The first layer, which captures simple patterns or small parts of the racer class, the fifth layer, which focuses on catching the head part of the ostrich class, and the seventh layer, which captures global information or relations among locally ambiguous features for the freight car class, belong to that category. Through these observations, we hypothesize that the RLA network emphasizes the layer that catches class-specific local patterns for each target class. Second, deep layers of the RLA network tend to learn class-agnostic features after acquiring class-specific or semantically salient features in previous layers. The layers marked with a yellow border denote representatives that focus less on class-specific local patterns in the images. Taking the ostrich class as an example, the last layer of the RLA network tends to learn class-agnostic local patterns. More visualizations for different target classes are in Appendix E. Compared with the original ResNet, which keeps allocating more class-specific neurons into the deep layers, these observations are quite interesting. We additionally note that the distinctly different highlighting patterns between close layers arise because the residual architecture learns the 'residual' while keeping previously obtained features from earlier layers. Furthermore, this observation accords with the investigation of class selectivity in the Gather-Excite network (a): a CNN with an intra-attention mechanism applied has lower class selectivity in deeper layers compared to the original backbone. We understand that this finding applies identically to our inter-attention mechanism (i.e., the RLA network), and our results support the findings of the previous work. Mean layer attention curve. Figure 5 shows the mean layer attention curve along the RLA network layers. Investigating the layer attention values over the RLA network enables a deeper understanding of the residual architecture and of traditional debates on CNNs. First, layer attention values are relatively high in shallow layers and rather lower in middle layers. We recognize that this observation could be another way to further understand the residual architecture. The intuition behind residual learning is to let the layers learn perturbations with reference to an identity function, and the original paper supports this intuition by displaying the standard deviation of feature responses along the layers. Here, feature responses are defined as the (batch-normalized) outcomes of residual blocks. Compared with that, we notice that our plot provides similar information but in a more straightforward way.
Through the observation that layer attention values decrease along the layers, we can read the intuition of residual learning as latter layers learning less important features. Second, the variance of the layer attention values increases along the layers, except for the last residual block, and the layers in the last residual block of the RLA network have the biggest layer attention values, close to 1. We interpret that the traditional debates on CNNs, that shallow layers learn simple or "low-level" features while deep layers learn powerful, semantic, or "high-level" features, support these observations. Supported by previous investigations, we think it natural that the RLA network learns large layer attention close to the last CNN layers, which are semantically salient, and that the layer attention values vary drastically in deep layers that contain abstract features. In this paper, we first proposed the inter-layer attention mechanism to enhance the representational power of CNNs. We structured our mechanism as the 'Recurrent Layer Attention network' by utilizing two new concepts: the weight balancing of CNN features along layers, and the link from shallow CNN layers to deep CNN layers via an RNN for directly conveying the feature hierarchy. We introduced structural derivatives of the RLA network: 'IA+RLA' to prove the applicability of our work to recent intra-layer attention mechanisms, and 'RLA-vector' to distill the impacts of the two proposed new concepts. We also precisely selected statistics for the context by focusing on the local patterns preserved in the feature summarization procedure. We evaluated the RLA network using the CIFAR and ImageNet-1K 2012 datasets for an image classification task, and also verified its generalization ability on an object detection task via experiments on the Microsoft COCO dataset. To demonstrate our hypothesis that the RLA network gains additional class discriminability, and to semantically understand how the RLA network induces its model parameters to be learned, we visualize the RLA network utilizing Grad-CAM visualization, plot the layer attention value curve, and report several interesting findings. For future work, we plan to integrate our inter-layer attention mechanism with intra-layer attention mechanisms with heavier experiments first, and to utilize the concept of making arbitrary connections from earlier layers to latter layers through an RNN in other domains. Image recognition task. Data augmentation and optimization details follow. We exploit scale augmentation, random-size cropping, and random horizontal flipping, without color augmentation. For optimization, we use Nesterov SGD with momentum 0.9 and a weight decay of 0.0001, and initialize the weights by the standard strategy. Object detection task. For a fast implementation on object detection tasks, we utilize the ImageNet-1K pretrained ResNet, SE network, and RLA network. Implementation details follow the works investigating the fast implementation of Faster R-CNN. We present the hyperparameters for the design of RLA, such as the reduction ratio r of the layer attention, the dimension of the context N, and the number of neurons M in the LSTM cell. Since these hyper-parameters are highly related to model complexity, selecting them is a crucial issue. In this section, we explain the standard used to select the hyper-parameters and demonstrate their effects on the performance of the network. Dimension of Context. Because feature volumes in the different layers of a CNN have different numbers of channels, the summarized feature volume must be down-sampled or up-sampled before being inserted into the LSTM.
However, these operations cause an unintended lack or loss of information. Therefore, we select the dimension of the context so as to minimize the number of down-sampling or up-sampling operations to be executed. Size of LSTM Units. In our work, the size of the LSTM hidden units, M, highly affects the model and computational complexity. Following other works that find a proper RNN hidden unit size by rule of thumb or by conducting experiments, we perform experiments to find the trade-off between performance and complexity. Table 3 shows the experimental results for varying M. Surprisingly, the performance of RLA does not drop as M decreases; rather, it increases. We deduce that the reason for this observation is that, because the distributions of the contexts are clearly separable, only a small number of LSTM hidden units is enough to reduce considerable bias. Reduction Ratio. The reduction ratio r has no golden rule either, just like the hidden units. Accordingly, we conducted experiments on the reduction ratio r and found only slight differences in top-1% error, in contrast to the experimental results with varying hidden units. Therefore, we compress the model complexity of the RLA network by choosing a small M, which also affects model performance, and further regulate the complexity using the reduction ratio. Exploiting a big M in RLA-vector. Following the interesting tendency that a smaller M induces a performance increase, we conducted an ablation experiment with M = 4. However, we found that RLA-vector with a small M yields a prominent performance decrease. We interpret this as the reduced model capacity from decreasing M being unable to scale the channel-sized vector of each layer in the CNN. For a fair comparison with the SE network, we set M = 32, considering that the average size of the global average pooled feature volumes also equals 32 in SE-ResNet-56. Hyperparameter selection in ImageNet-1K experiments. We observed the tendency in performance from changing the hyperparameters in the parametric study. However, this tendency does not always hold on other datasets. Considering the data diversity of the ImageNet-1K 2012 dataset, we exploit a commonly used RNN hidden unit size, M = 512, as exploited in previous CNN-RNN papers. We note that further consideration of differing LSTM hidden unit sizes M could possibly report better performance even with fewer parameters, as shown in the parametric study on the CIFAR dataset. For the reduction ratio, we apply r = 8 for RLA-18/50 and r = 16 for RLA-101. We describe the location of the connection between the CNN and RNN in the RLA-50 network. When exploiting a vanilla CNN, applying RLA to the backbone CNN is straightforward: connections with the concurrently propagating RNN are made just after every convolution layer of the network. However, it is less obvious how to apply RLA when using a residual architecture. Table 4 shows the structure of the backbone CNN using ResNet-50 and ResNet-50 + RLA. Note that the RNN that concurrently propagates with the CNN is not depicted for ResNet-50 + RLA. In short, the RLA network stores the context and infers the layer attention for scaling features once per residual block. To aid the semantic understanding of how the RLA network induces its model parameters to be learned, we provide Grad-CAM visualizations of the intermediate feature maps over the CNN layers. We find observations similar to those discussed in the main article on other target classes: the tendency to enhance the layer attention value of layers that catch the most semantically important visual features, and another tendency to learn non-class-specific features in deep layers.
Figure 6 depicts visualizations for the target classes of junco, echidna, killer whale, Leonberg, tiger cat, and sleeping bag.
We propose a new type of end-to-end trainable attention module, which applies global weight balancing among layers by utilizing an RNN co-propagating with the CNN.
976
scitldr
We seek to auto-generate stronger input features for ML methods faced with limited training data. Biological neural nets (BNNs) excel at fast learning, implying that they extract highly informative features. In particular, the insect olfactory network learns new odors very rapidly, by means of three key elements: a competitive inhibition layer; randomized, sparse connectivity into a high-dimensional sparse plastic layer; and Hebbian updates of synaptic weights. In this work we deploy MothNet, a computational model of the moth olfactory network, as an automatic feature generator. Attached as a front-end pre-processor, MothNet's readout neurons provide new features, derived from the original features, for use by standard ML classifiers. These "insect cyborgs" (part BNN and part ML method) have significantly better performance than baseline ML methods alone on vectorized MNIST and Omniglot data sets, reducing average test set errors by 20% to 55%. The MothNet feature generator also substantially out-performs other feature generating methods including PCA, PLS, and NNs. These results highlight the potential value of BNN-inspired feature generators in the ML context. Machine learning (ML) methods, especially neural nets (NNs) with backprop, often require large amounts of training data to attain their high performance. This creates bottlenecks to deployment, and constrains the types of problems that can be addressed. The limited-data constraint is common for ML targets that use medical, scientific, or field-collected data, as well as AI efforts focused on rapid learning. We seek to improve ML methods' ability to learn from limited data by means of an architecture that automatically generates, from existing features, a new set of class-separating features. Biological neural nets (BNNs) are able to learn rapidly, even from just a few samples. Assuming that rapid learning requires effective ways to separate classes, we may look to BNNs for effective feature-generators. One of the simplest BNNs that can learn is the insect olfactory network, containing the Antennal Lobe (AL) and Mushroom Body (MB), which can learn a new odor given just a few exposures. This simple but effective feedforward network contains three key elements that are ubiquitous in BNN designs: competitive inhibition, high-dimensional sparse layers [7; 8], and a Hebbian update mechanism. Synaptic connections are largely random. MothNet is a computational model of the M. sexta moth AL-MB that demonstrated rapid learning of vectorized MNIST digits, with performance superior to standard ML methods given N ≤ 10 training samples per class. The MothNet model includes three key elements, as follows. (i) Competitive inhibition in the AL: each neural unit in the AL receives input from one feature, and outputs not only a feedforward excitatory signal to the MB, but also an inhibitory signal to the other neural units in the AL that tries to dampen other features' presence in the sample's output AL signature. (ii) Sparsity in the MB, of two types: the projections from the AL to the MB are non-dense (≈ 15% non-zero), and the MB neurons fire sparsely in the sense that only the strongest 5% to 15% of the total population are allowed to fire (through a mechanism of global inhibition). (iii) Weight updates affect only MB→Readout connections (AL connections are not plastic).
Hebbian updates occur as Δw_ij = α f_i f_j if f_i f_j > 0 (growth), and Δw_ij = −δ w_ij if f_i f_j = 0 (decay), where f_i, f_j are two neural firing rates (f_i ∈ MB, f_j ∈ Readouts) with connection weight w_ij. In this work we tested whether the MothNet architecture can usefully serve as a front-end feature generator for an ML classifier (our thanks to Blake Richards for this suggestion). We combined MothNet with a downstream ML module, so that the Readouts of the trained AL-MB model were fed into the ML module as additional features. From the ML perspective, the AL-MB acted as an automatic feature generator; from the biological perspective, the ML module stood in for the downstream processing in more complex BNNs. Our test case was a non-spatial, 85-feature, 10-class task derived from the downsampled, vectorized MNIST data set (hereafter "vMNIST"). On this non-spatial dataset, CNNs or other spatial methods were not applicable. The trained MothNet Readouts, used as features, significantly improved the accuracies of ML methods (NN, SVM, and Nearest Neighbors) on the test set in almost every case. That is, the original input features (pixels) contained class-relevant information unavailable to the ML methods alone, but which the AL-MB network encoded in a form that enabled the ML methods to access it. MothNet-generated features also significantly out-performed features generated by PCA (Principal Components Analysis), PLS (Partial Least Squares), NNs, and transfer learning (weight pretraining) in terms of their ability to improve ML accuracy. These results indicate that the insect-derived network generated significantly stronger features than these other methods. To generate vMNIST, we downsampled, preprocessed, and vectorized the MNIST data set to give samples with 85 pixels-as-features. vMNIST has the advantage that our baseline ML methods (Nearest Neighbors, SVM, and Neural Net) do not attain full accuracy at low N. Trained accuracy of the baseline ML methods was controlled by restricting the training data. Full network architecture details of the AL-MB model (MothNet) are given in the original MothNet paper. Full Matlab code for these cyborg experiments, including comparison methods, all details regarding ML methods and hyperparameters, and code for MothNet simulations, can be found online. MothNet instances were generated randomly from templates that specified connectivity parameters. We ran two sets of experiments. Cyborg vs baseline ML methods on vMNIST: experiments were structured as follows. 1. A random set of N training samples per class was drawn from vMNIST. 2. The ML methods trained on these samples, to provide a baseline. 3. MothNet was trained on these same samples, using time-evolved stochastic differential equation simulations and Hebbian updates. 4. The ML methods were then retrained from scratch, with the Readout Neuron outputs from the trained MothNet instance fed in as additional features. These were the "insect cyborgs", i.e., an AL-MB feature generator joined to an ML classifier. 5. Trained ML accuracies of the baselines and cyborgs were compared to assess gains. To compare the effectiveness of MothNet features vs features generated by conventional ML methods, we ran vMNIST experiments structured as above, but with MothNet replaced by one of the following feature generators: 1. PCA applied to the vMNIST training samples. The new features were the projections onto each of the top 10 modes. 2. PLS applied to the vMNIST training samples. The new features were the projections onto each of the top 10 modes.
MothNet readouts as features significantly improved accuracy of ML methods, demonstrating that the MothNet architecture effectively captured new class-relevant features. We also tested a non-spatial, 10-class task derived from the Omniglot data set and found similar gains. MothNet-generated features were also far more effective than the comparison feature generators (PCA, PLS, and NN). Gains due to MothNet features on vMNIST: Baseline ML test set accuracies ranged from 10% to 88%, depending on method and on N (we stopped our sweep at N = 100). This baseline accuracy is marked by the lower colored circles in Fig 1. Cyborg test set accuracy is marked by the upper colored circles in Fig 1, and the raw gains in accuracy due to MothNet features are marked by thick vertical bars. MothNet features increased raw accuracy across all ML models. Relative reduction in test set error, as a percentage of baseline error, was 20% to 55%, with high baseline accuracies seeing the most benefit (Fig 2). NN models saw the greatest benefits, with 40% to 55% relative reduction in test error. Remarkably, a MothNet front-end improved ML accuracy even in cases where the ML baseline already exceeded the ≈ 75% accuracy ceiling of MothNet (e.g. NNs at N = 15 to 100 samples per class): the MothNet readouts contained clustering information which ML methods leveraged more effectively than MothNet itself. Gains were significant in almost all cases with N > 3. Table 1 gives p-values of the gains due to MothNet. Comparison with other feature generators: We ran the cyborg framework on vMNIST using PCA (projections onto top 10 modes), PLS (projections onto top 10 modes), and NN (logs of the 10 output units) as feature generators. Each feature generator was trained (e.g. PCA projections were defined) using the training samples. Table 2 gives the relative increase in mean accuracy due to the various feature generators (or to pre-training) for NN models (13 runs per data point). Results for Nearest Neighbors and SVM were similar. MothNet features were far more effective than these other methods. Effect of pass-through AL: The MothNet architecture has two main layers: a competitive inhibition layer (AL) and a high-dimensional, sparse layer (MB). To test the effectiveness of the MB alone, we ran the vMNIST experiments, but using a pass-through (identity) AL layer for MothNet. Cyborgs with a pass-through AL still posted significant improvements in accuracy over baseline ML methods. The gains of cyborgs with pass-through ALs were generally between 60% and 100% of the gains posted by cyborgs with normal ALs (see Table 3), suggesting that the high-dimensional, trainable layer (the MB) was most important.
However, the competitive inhibition of the AL layer clearly added value in terms of generating strong features, up to 40% of the total gain. NNs benefitted most from the AL layer. We deployed an automated feature generator based on a very simple BNN, containing three key elements rare in engineered NNs but endemic in BNNs of all complexity levels: (i) competitive inhibition; (ii) sparse projection into a high-dimensional sparse layer; and (iii) Hebbian weight updates for training. This bio-mimetic feature generator significantly improved the learning abilities of standard ML methods on both vMNIST and vOmniglot. Class-relevant information in the raw feature distributions, not extracted by the ML methods alone, was evidently made accessible by MothNet's pre-processing. In addition, MothNet features were consistently much more useful than features generated by standard methods such as PCA, PLS, NNs, and pre-training. The competitive inhibition layer may enhance classification by creating several attractor basins for inputs, each focused on the features that present most strongly for a given class. This may push otherwise similar samples (of different classes) away from each other, towards their respective class attractors, increasing the effective distance between the samples. The sparse connectivity from AL to MB has been analysed as an additive function, which has computational and anti-noise benefits. The insect MB brings to mind sparse autoencoders (SAs). However, there are several differences: MBs do not seek to match the identity function; the sparse layers of SAs have fewer active neurons than the input dimension, while in the MB the number of active neurons is much greater than the input dimension; MBs have no pre-training step; and the MB needs very few samples to bake in structure that improves classification. The MB differs from Reservoir Networks in that MB neurons have no recurrent connections. Finally, the Hebbian update mechanism appears to be quite distinct from backprop. It has no objective function or output-based loss that is pushed back through the network, and Hebbian weight updates, either growth or decay, occur on a local "use it or lose it" basis. We suspect that the dissimilarity of the optimizers (MothNet vs ML) was an asset in terms of increasing total encoded information.
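To make the locality of this "use it or lose it" rule concrete, here is a toy NumPy sketch of the growth/decay update defined earlier (∆w_ij = αf_i f_j on co-activity, −δw_ij otherwise). The rates alpha and delta are placeholder values, not those used in the MothNet experiments.

```python
import numpy as np

def hebbian_update(w, f_mb, f_out, alpha=0.05, delta=0.01):
    """One Hebbian step for MB->Readout weights w (n_out x n_mb).

    Growth: dw_ij = alpha * f_i * f_j where the pair fired together.
    Decay:  dw_ij = -delta * w_ij    where the pair was silent.
    """
    coactive = np.outer(f_out, f_mb)        # f_i * f_j for every connection
    fired = coactive > 0                    # pairs with joint activity
    w = w + alpha * coactive * fired        # local growth term
    w = w - delta * w * (~fired)            # "use it or lose it" decay
    return w

# toy usage: 30 sparse MB firing rates feeding 10 readout neurons
rng = np.random.default_rng(0)
f_mb = rng.random(30) * (rng.random(30) < 0.1)   # ~10% of MB units fire
f_out = rng.random(10)
w = hebbian_update(np.zeros((10, 30)), f_mb, f_out)
```

Note there is no loss being pushed back through the network: each connection updates from its own pre/post activity alone.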
Features auto-generated by the bio-mimetic MothNet model significantly improve the test accuracy of standard ML methods on vectorized MNIST. The MothNet-generated features also outperform standard feature generators.
977
scitldr
We present an approach for anytime predictions in deep neural networks (DNNs). For each test sample, an anytime predictor produces a coarse result quickly, and then continues to refine it until the test-time computational budget is depleted. Such predictors can address the growing computational problem of DNNs by automatically adjusting to varying test-time budgets. In this work, we study a general augmentation to feed-forward networks to form anytime neural networks (ANNs) via auxiliary predictions and losses. Specifically, we point out a blind-spot in recent studies of such ANNs: the importance of high final accuracy. In fact, we show on multiple recognition data-sets and architectures that by having near-optimal final predictions in small anytime models, we can effectively double the speed of large ones to reach the corresponding accuracy level. We achieve such speed-ups with a simple weighting of anytime losses that oscillates during training. We also assemble a sequence of exponentially deepening ANNs, to achieve both theoretically and practically near-optimal anytime predictions at any budget, at the cost of a constant fraction of additional consumed budget. In recent years, the accuracy in visual recognition tasks has been greatly improved by increasingly complex convolutional neural networks, from AlexNet BID8 and VGG BID12, to ResNet BID3, ResNeXt BID14, and DenseNet BID6. However, the number of applications that require latency-sensitive responses is growing rapidly. Furthermore, their test-time computational budget can often vary. E.g., autonomous vehicles require real-time object detection, but the required detection speed depends on the vehicle speed; web servers need to meet varying amounts of data and user-request throughput throughout a day. Thus, it can be difficult for such applications to choose between slow predictors with high accuracy and fast predictors with low accuracy. In many cases, this dilemma can be resolved by an anytime predictor BID4 BID0 BID16, which, for each test sample, produces a fast and crude initial prediction and continues to refine it as budget allows, so that at any test-time budget, the anytime predictor has a valid result for the sample, and the more budget is spent, the better the prediction is. In this work, we focus on the anytime prediction problem in neural networks. We follow the recent works BID10 BID15 BID5 to append auxiliary predictions and losses in feed-forward networks for anytime predictions, and train them jointly end-to-end. However, we note that the existing methods all put only a small fraction of the total weighting on the final prediction, and as a result, large anytime models are often only as accurate as much smaller non-anytime models, because the accuracy gain is so costly in DNNs, as demonstrated in FIG0. We address this problem with novel and simple oscillating weightings of the losses, and will show in Sec. 3 that our small anytime models with near-optimal final predictions can effectively speed up large ones without them by about two times, on multiple data-sets, including ILSVRC BID11, and on multiple models, including the very recent Multi-Scale-DenseNets (MSDNets) BID5. Observing that the proposed training techniques lead to ANNs that are near-optimal in late predictions but are not as accurate in the early predictions, we assemble ANNs of exponentially increasing depths to dedicate early predictions to smaller networks, while only delaying large networks by a constant fraction of additional test-time budgets.
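For orientation, here is a minimal PyTorch sketch of the ANN augmentation just described: an auxiliary prediction head after each block, trained with a weighted sum of per-depth losses. The widths, head design, and weights B are illustrative placeholders, not the architectures or schemes studied in the paper.

```python
import torch
import torch.nn as nn

class AnytimeNet(nn.Module):
    """Feed-forward net augmented with an auxiliary predictor per block."""
    def __init__(self, widths=(32, 64, 128), n_classes=10):
        super().__init__()
        self.blocks, self.heads = nn.ModuleList(), nn.ModuleList()
        d = 784
        for w in widths:
            self.blocks.append(nn.Sequential(nn.Linear(d, w), nn.ReLU()))
            self.heads.append(nn.Linear(w, n_classes))  # auxiliary prediction
            d = w

    def forward(self, x):
        preds = []
        for block, head in zip(self.blocks, self.heads):
            x = block(x)
            preds.append(head(x))   # one anytime prediction per depth
        return preds

net = AnytimeNet()
x, y = torch.randn(8, 784), torch.randint(0, 10, (8,))
B = [0.25, 0.25, 0.5]               # per-loss weights; schemes are compared below
loss = sum(b * nn.functional.cross_entropy(p, y)
           for b, p in zip(B, net(x)))
loss.backward()
```

At test time, the predictor simply reports the deepest head it has finished computing when the budget runs out.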
As illustrated in FIG0, given a sample (x, y) ∼ D, the initial feature map x_0 is set to x, and the subsequent feature transformations f_1, f_2, ..., f_L generate a sequence of intermediate features x_i = f_i(x_{i−1}) for i = 1, ..., L. Each feature map x_i can then produce an auxiliary prediction ŷ_i using a prediction layer g_i: ŷ_i = g_i(x_i; w_i) with parameter w_i. Each auxiliary prediction ŷ_i then incurs an expected loss ℓ_i(θ) = E_{(x,y)∼D}[ℓ(ŷ_i, y)]. We call such an augmented network an Anytime Neural Network (ANN). Let θ denote the parameters of the full ANN. The most common way to optimize these losses, ℓ_1, ..., ℓ_L, end-to-end is to optimize them in a weighted sum, min_θ Σ_{i=1}^{L} B_i ℓ_i(θ), where {B_i}_i form the weight scheme for the losses. Alternating SIEVE weights. Three experimental observations lead to our proposed SIEVE weight scheme. First, the existing weight schemes, CONST BID10 BID5 and LINEAR, both incur more than 10% relative increases in final test errors, which effectively slow down anytime models multiple times. Second, we found that a large weight can improve a neighborhood of losses, thanks to the high correlation among neighboring losses. Finally, keeping a fixed weighting may lead to solutions where the sum of the gradients is zero, but the individual gradients are non-zero. The proposed SIEVE scheme has half of the total weight in the final loss, so that the final gradient can outweigh other gradients when all loss gradients have equal two-norms. It also has uneven weights in the early losses, to let as many losses as possible be near large weights. Formally, for L losses, we first add one unit of weight to B_{L/2}, with indices rounded. We then add one unit to each B_{kL/4} for k = 1, 2, 3, and then to each B_{kL/8} for k = 1, 2, ..., 7, and so on, until all predictors have non-zero weights. We finally normalize the B_i so that Σ_{i<L} B_i = 1, and set B_L = 1. During each training iteration, we also sample a layer i proportionally to the B_i, and temporarily add B_L ℓ_i to the total loss, so as to oscillate the weights and avoid spurious solutions. We call ANNs with alternating weights alternating ANNs (AANNs). Though the proposed techniques are heuristics, they effectively speed up anytime models multiple times, as shown in Sec. 3. We hope our experimental results can inspire, and set baselines for, future principled approaches. EANN. Since AANNs put high weights on the final layer, they trade early accuracy for the late ones. We leverage this effect to improve early predictions of large ANNs: we propose to form a sequence of ANNs whose depths grow exponentially (EANN). By dedicating early predictions to small networks, EANN can achieve better early results. Furthermore, if the largest model has depth L, we only compute log L small networks before the final one, and the total cost of the small networks is only a constant fraction of the final one. Hence, we only consume a constant fraction of additional test-time budget. FIG0 illustrates how an EANN with exponential base b = 2 works at test time. The EANN sequentially computes the ANNs, and only outputs an anytime result if the current result is better than previous ones in validation. Formally, if we assume that each ANN has near-optimal results after 1/b of its layers, then we can prove that for any budget B, the EANN can achieve a near-optimal result for that budget, at the cost of a constant fraction of additional consumed budget.
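Our reading of the SIEVE construction can be sketched as a short weight-generating routine in Python; the exact rounding and tie-breaking in the paper may differ, so treat this as illustrative.

```python
def sieve_weights(L):
    """SIEVE weights for an ANN with L losses (L >= 2), per the scheme above."""
    B = [0.0] * L
    denom = 2
    while not all(b > 0 for b in B[:-1]):    # until every early loss has weight
        for k in range(1, denom):
            idx = min(max(round(k * L / denom), 1), L - 1)  # rounded position
            B[idx - 1] += 1.0
        denom *= 2
    s = sum(B[:-1])
    return [b / s for b in B[:-1]] + [1.0]   # early sum = final weight = 1

print(sieve_weights(8))   # half of the total weight sits on the final loss
```

During training, one would additionally sample a layer i with probability proportional to B_i each iteration and temporarily boost its loss by B_L, giving the oscillation described above.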
We present two key results: small anytime models with SIEVE can outperform large ones with CONST, and EANNs can improve early accuracy, but cost a constant fraction of extra budget. SIEVE vs. CONST of double costs. In FIG1 and FIG1, we compare SIEVE and CONST on ANNs that are based on ResNets, on CIFAR100 BID7 and ILSVRC BID11. The networks with CONST have double the depth of those with SIEVE. We observe that SIEVE leads to the same final error rates as CONST of double the cost, but does so much faster. The two schemes also have similar early performance. Hence, SIEVE effectively speeds up the predictions of CONST by about two times. In FIG1, we experiment with the very recent Multi-Scale-DenseNets (MSDNets) BID5, which are specifically modified from the recently popular DenseNets BID6 to produce state-of-the-art anytime predictions. We again observe that by improving the final anytime prediction of the smallest MSDNet26 without sacrificing too much early prediction, we make MSDNet26 effectively a sped-up version of MSDNet36 and MSDNet41. EANN vs. ANNs and OPT. In FIG1, we assemble ResNet-ANNs of 45, 81 and 153 conv layers to form EANNs. We compare the EANNs against the parallel OPT, which results from running regular networks of various depths in parallel. We observe that EANNs are able to significantly reduce the early errors of ANNs, but reach the final error rate later. Furthermore, ANNs with more accurate final predictions using SIEVE and EXP-LIN are able to outperform CONST and LINEAR, since whenever an ANN completes in an EANN, its final result is the best one for a long period of time.
By focusing more on the final predictions in anytime predictors (such as the very recent Multi-Scale-DenseNets), we make small anytime models to outperform large ones that don't have such focus.
978
scitldr
Computational imaging systems jointly design computation and hardware to retrieve information which is not traditionally accessible with standard imaging systems. Recently, critical aspects such as experimental design and image priors are optimized through deep neural networks formed by the unrolled iterations of classical physics-based reconstructions (termed physics-based networks). However, for real-world large-scale systems, computing gradients via backpropagation restricts learning due to memory limitations of graphical processing units. In this work, we propose a memory-efficient learning procedure that exploits the reversibility of the network's layers to enable data-driven design for large-scale computational imaging. We demonstrate our method's practicality on two large-scale systems: super-resolution optical microscopy and multi-channel magnetic resonance imaging. Computational imaging systems (tomographic systems, computational optics, magnetic resonance imaging, to name a few) jointly design software and hardware to retrieve information which is not traditionally accessible on standard imaging systems. Generally, such systems are characterized by how the information is encoded (forward process) and decoded (inverse problem) from the measurements. The decoding process is typically iterative in nature, alternating between enforcing data consistency and image prior knowledge. Recent work has demonstrated the ability to optimize computational imaging systems by unrolling the iterative decoding process to form a differentiable Physics-based Network (PbN) (1; 2; 3) and then relying on a dataset and training to learn the system's design parameters, e.g. experimental design (3; 4; 5), image prior model (1; 2; 6; 7). PbNs are constructed from the operations of reconstruction, e.g. the proximal gradient descent algorithm. By including known structures and quantities, such as the forward model, gradient, and proximal updates, PbNs can be efficiently parameterized by only a few learnable variables, thereby enabling an efficient use of training data while still retaining the robustness associated with conventional physics-based inverse problems. Training PbNs relies on gradient-based updates computed using backpropagation (an implementation of reverse-mode differentiation). Most modern imaging systems seek to decode ever-growing quantities of information (gigabytes to terabytes) and, as this grows, the memory required to perform backpropagation is limited by the memory capacity of modern graphical processing units (GPUs). Methods to save memory during backpropagation (e.g. forward recalculation, reverse recalculation, and checkpointing) trade off spatial and temporal complexity. For a PbN with N layers, standard backpropagation achieves O(N) temporal and spatial complexity. Forward recalculation achieves O(1) memory complexity, but has to recalculate unstored variables forward from the input of the network when needed, yielding O(N^2) temporal complexity. Forward checkpointing smoothly trades off temporal, O(NK), and spatial, O(N/K), complexity by saving variables every K layers and forward-recalculating unstored variables from the closest checkpoint. Reverse recalculation provides a practical solution to beat the trade-off between spatial and temporal complexity by calculating unstored variables in reverse from the output of the network, yielding O(N) temporal and O(1) spatial complexities.
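As a concrete point of reference for the forward-checkpointing trade-off described above, PyTorch ships a sequential checkpointing utility; the toy model and segment count below are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

# A toy 32-layer network: with segments=4, roughly N/K activations are kept,
# and each segment is recomputed during the backward pass.
layers = nn.Sequential(*[nn.Sequential(nn.Linear(256, 256), nn.ReLU())
                         for _ in range(32)])
x = torch.randn(16, 256, requires_grad=True)
y = checkpoint_sequential(layers, segments=4, input=x)
y.sum().backward()
```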
Recently, several reversibility schemes have been proposed for residual networks, learning ordinary differential equations, and other specialized network architectures (11; 12). In this work, we propose a memory-efficient learning procedure for backpropagation for the PbN formed from proximal gradient descent, thereby enabling learning for many large-scale computational imaging systems. Based on the concepts of invertibility and reverse recalculation, we detail how backpropagation can be performed without the need to store intermediate variables for networks composed of gradient and proximal layers. We highlight practical restrictions on the layers and introduce a hybrid scheme that combines our reverse recalculation methods with checkpointing to mitigate numerical error accumulation. Finally, we demonstrate our method's usefulness to learn the design for two practical large-scale computational imaging systems: super-resolution optical microscopy (Fourier Ptychography) and multi-channel magnetic resonance imaging. Computational imaging systems are described by how sought information is encoded to and decoded from a set of measurements. The encoding of information, x, into measurements, y, is given by y = A(x) + n (Eq. 1), where A is the forward model that characterizes the measurement system physics and n is random system noise. The forward model is a continuous process, but is often approximated by a discrete representation. The retrieval of information from a set of measurements, i.e. decoding, is commonly structured using an inverse problem formulation, x* = arg min_x D(x; y) + P(x) (Eq. 2), where D(·) is a data fidelity penalty and P(·) is a prior penalty. When n is governed by a known noise model, the data consistency penalty can be written as the negative log-likelihood of the appropriate distribution. When P(·) is a non-smooth prior (e.g. ℓ1, total variation), proximal gradient descent (PGD) and its accelerated variants are often efficient algorithms to minimize the objective in Eq. 2, and are composed of the following alternating steps: z^(k) = x^(k) − α∇_x D(x^(k); y) (Eq. 3) and x^(k+1) = prox_P(z^(k)) (Eq. 4), where α is the gradient step size, ∇_x is the gradient operator, prox_P is a proximal function that enforces the prior, and x^(k) and z^(k) are intermediate variables for the k-th iteration. The structure of the PbN is determined by unrolling N iterations of the optimizer to form the N layers of a network (Eq. 3 and Eq. 4 form a single layer). Specifically, the input to the network is the initialization of the optimization, x^(0), and the output is the resultant, x^(N). The learnable parameters are optimized using gradient-based methods. Common machine learning toolboxes' (e.g. PyTorch, TensorFlow, Caffe) auto-differentiation functionalities are used to compute gradients for backpropagation. Auto-differentiation accomplishes this by creating a graph composed of the PbN's operations and storing intermediate variables in memory. Our main contribution is to improve the spatial complexity of backpropagation for PbNs by treating the larger single graph for auto-differentiation as a series of smaller graphs. Specifically, consider a PbN, F, composed of a sequence of layers, x^(k+1) = F^(k)(x^(k); θ^(k)), where x^(k) and x^(k+1) are the k-th layer input and output, respectively, and θ^(k) are its learnable parameters. When performing reverse-mode differentiation, our method treats a PbN of N layers as N separate smaller graphs, processed one at a time, rather than as a single large graph, thereby saving a factor of N in memory.
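A minimal PyTorch sketch of this layer-by-layer reverse recalculation is given below, assuming each layer exposes an exact inverse callable; the function names are hypothetical.

```python
import torch

def memory_efficient_backward(layers, inverses, x_N, grad_out):
    """Backprop through unrolled layers without stored activations.

    Walk the network in reverse: recompute each layer's input with its
    inverse, rebuild that layer's small graph, and autodiff only it.
    layers[k] and inverses[k] are assumed to be mutually inverse callables,
    and grad_out is dLoss/d(output) with the output's shape.
    """
    x = x_N
    for layer, inverse in zip(reversed(layers), reversed(inverses)):
        with torch.no_grad():
            x = inverse(x)                   # recover the layer's input
        x_in = x.detach().requires_grad_(True)
        v = layer(x_in)                      # rebuild one small graph
        v.backward(grad_out)                 # accumulates grads into layer params
        grad_out = x_in.grad                 # chain rule: pass gradient upstream
    return grad_out                          # gradient w.r.t. the network input
```

Only one layer's graph is alive at any moment, which is where the factor-of-N memory saving comes from.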
As outlined in Alg. 1, we first recalculate the current layer's input using the layer's inverse operation, and then form one of the smaller graphs by recomputing the output of the layer, v^(k), from the recalculated input. To compute gradients, we then rely on auto-differentiation of each layer's smaller graph to compute the gradient of the loss, L, with respect to that layer's input and parameters. The procedure is repeated for all N layers in reverse order. [Algorithm 1 (Memory-efficient learning for physics-based networks): procedure MEMORY-EFFICIENT BACKPROPAGATION; the step-by-step listing did not survive extraction.] In order to perform the reverse-mode differentiation efficiently, we must be able to compute each layer's inverse operation. The remainder of this section overviews the procedures to invert gradient and proximal update layers. A common interpretation of gradient descent is as a forward Euler discretization of a continuous-time ordinary differential equation. As a consequence, the inverse of the gradient step layer (Eq. 3) can be viewed as a backward Euler step, x^(k) = z^(k) + α∇_x D(x^(k); y). This implicit equation can be solved iteratively using the fixed point algorithm (Alg. 2). Convergence is guaranteed if α Lip(∇_x D) < 1, where Lip(·) computes the Lipschitz constant of its argument. In the setting when D(x; y) = ‖Ax − y‖^2 and A is linear, this can be ensured if α < 1/(2σ_max(A)^2), where σ_max(·) computes the largest singular value of its argument. Finally, as given by the Banach fixed point theorem, the fixed point algorithm (Alg. 2) will have an exponential rate of convergence.
Algorithm 2: Inverse for gradient layer
1: procedure FIXED-POINT-METHOD(z, T)
2:   x ← z
3:   for t = 1 to T do
4:     x ← z + α∇_x D(x; y)
5:   end for
6:   return x
7: end procedure
The proximal update (Eq. 4) is defined by the following optimization problem: prox_P(z^(k)) = arg min_x ½‖x − z^(k)‖^2 + P(x). For differentiable P(·), the optimum satisfies x^(k+1) + ∇_x P(x^(k+1)) = z^(k). In contrast to the gradient update layer, the proximal update layer can be thought of as a backward Euler step. This allows its inverse to be expressed as a forward Euler step, z^(k) = x^(k+1) + ∇_x P(x^(k+1)), when the proximal function is bijective. If the proximal function is not bijective (e.g. prox_ℓ1), the inversion is not straightforward. However, in many cases it is possible to substitute it with a bijective function with similar behavior. Reverse recalculation of the unstored variables is non-exact, as the operations used to calculate the variables are not identical to the forward calculation. The result is numerical error between the original forward and reverse calculated variables, and as more iterations are unrolled, numerical error can accumulate. To mitigate these effects, some of the intermediate variables can be stored from the forward calculation, referred to as checkpoints. Memory permitting, as many checkpoints should be stored as possible to ensure accuracy while performing reverse recalculation. While most PbNs cannot afford to store all variables required for reverse-mode differentiation, it is often possible to store a few.
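To see the fixed-point inversion in action, here is a toy PyTorch check that the backward Euler iteration of Alg. 2 recovers a gradient step's input. The quadratic D, matrix sizes, and step size (kept below the contraction bound above) are illustrative choices.

```python
import torch

def gradient_layer(x, A, y, alpha):
    # Forward Euler gradient step (Eq. 3) for D(x; y) = ||Ax - y||^2
    return x - alpha * 2 * A.T @ (A @ x - y)

def inverse_gradient_layer(z, A, y, alpha, T=8):
    # Invert the gradient step via the fixed-point iteration of Alg. 2
    x = z.clone()
    for _ in range(T):
        x = z + alpha * 2 * A.T @ (A @ x - y)   # backward Euler fixed point
    return x

torch.manual_seed(0)
A, y, x0 = torch.randn(5, 5), torch.randn(5), torch.randn(5)
alpha = 0.4 / (2 * torch.linalg.matrix_norm(A, 2) ** 2)  # contraction holds
z = gradient_layer(x0, A, y, alpha)
print(torch.norm(inverse_gradient_layer(z, A, y, alpha) - x0))  # ~1e-3 or less
```

Because the iteration map is a contraction with factor α·2σ_max(A)^2 = 0.4 here, the recovery error shrinks geometrically with T.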
Standard bright-field microscopy offers a versatile system to image in vitro biological samples; however, it is restricted to imaging either a large field of view or at high resolution. Fourier Ptychographic Microscopy (FPM) is a super-resolution (SR) method that can create gigapixel-scale images beating this trade-off on a standard optical microscope, by acquiring a series of measurements (up to hundreds) under various illumination settings on an LED array microscope and combining them via a phase-retrieval-based optimization. The system's dependence on many measurements inhibits its ability to image live, fast-moving biology. Reducing the number of measurements is possible using linear multiplexing, and state-of-the-art performance is achieved by forming a PbN and learning its experimental design (4; 3); however, this is currently limited in scale due to GPU memory constraints (terabyte-scale memory is required for learning the full measurement system). With our proposed memory-efficient learning framework, we reduce the required memory to only a few gigabytes, thereby enabling the use of consumer-grade GPU hardware. To evaluate accuracy, we compare standard learning with our proposed memory-efficient learning on a problem that fits in standard GPU memory. We reproduce results where the number of measurements is reduced by a factor of 10: standard learning requires 6.26GB of memory, while memory-efficient learning requires only 0.627GB, and training time is only increased by a factor of 2. To perform memory-efficient learning, we set T = 4 and checkpoint every 10 unrolled iterations. The testing losses of our method and standard learning are comparable (Fig. 1a). In addition, we qualitatively highlight the equivalence of the two methods, displaying SR reconstructions with learned design using the standard (Fig. 1d) and memory-efficient (Fig. 1e) methods. For relative comparison, we display a single low-resolution measurement (Fig. 1b) and the ground truth SR reconstruction using all measurements (Fig. 1c). MRI is a powerful Fourier-based medical imaging modality that non-invasively captures rich biophysical information without ionizing radiation. Since MRI acquisition time is directly proportional to the number of acquired measurements, reducing measurements leads to immediate impact on patient throughput and enables capturing fast-changing physiological dynamics. Multi-channel MRI is the standard of care in clinical systems and uses multiple receive coils distributed around the body to acquire measurements in parallel, thereby reducing the total number of required acquisition frames for decoding. By additionally modifying the measurement pattern to take advantage of image prior knowledge, e.g. through compressed sensing, it is possible to dramatically reduce scan times. As with experimental design, PbNs with learned deep image priors have demonstrated state-of-the-art performance for multi-channel MRI (20; 6), but are limited in network size and number of unrolled iterations due to the memory required for training. Our memory-efficient learning reduces the memory footprint at training time, thereby enabling learning for larger problems. To evaluate our proposed memory-efficient learning, we reproduce the results for the "SD-ET-WD" PbN, which is equivalent to PGD (10 unrolled iterations) where the proximal update is replaced with a learned invertible residual convolutional neural network (RCNN) (21; 11; 9). We compare training with full backpropagation, requiring 10.77GB of memory and 3:50 hours, versus memory-efficient learning, requiring 2.11GB and 8:25 hours. We set T = 6 and do not use checkpointing. As Fig. 2 shows, the training loss is comparable across epochs, and inference results are similar on one image in the training set, with a normalized root mean-squared error of 0.03 between conventional and memory-efficient learning. Discussion: Our proposed memory-efficient learning opens the door to applications that are not otherwise possible to train due to GPU memory constraints, without a large increase in training time. While we specialized the procedure to PGD networks, similar approaches can be taken to invert other PbNs with more complex subroutines, such as solving linear systems of equations.
However, sufficient conditions for invertibility must be met. This limitation is clear in the case of a gradient descent block with an evolving step size, as the Lipschitz constant may no longer satisfy Eq. 7. Furthermore, the convergent behavior of optimization to minima makes accurate reverse recalculation of unstored variables severely ill-posed and can cause numerical error accumulation. Checkpoints can be used to improve the accuracy of reverse recalculated variables, though most PbN are not deep enough for numerical convergence to occur. In this communication, we presented a practical memory-efficient learning method for large-scale computational imaging problems without dramatically increasing training time. Using the concept of reversibility, we implemented reverse-mode differentiation with favorable spatial and temporal complexities. We demonstrated our method on two representative applications: SR optical microscopy and multi-channel MRI. We expect other computational imaging systems to nicely fall within our framework.
We propose a memory-efficient learning procedure that exploits the reversibility of the network’s layers to enable data-driven design for large-scale computational imaging.
979
scitldr
Existing multi-agent reinforcement learning (MARL) communication methods have relied on a trusted third party (TTP) to distribute reward to agents, leaving them inapplicable in peer-to-peer environments. This paper proposes reward distribution using Neuron as an Agent (NaaA) in MARL without a TTP, with two key ideas: (i) inter-agent reward distribution and (ii) auction theory. Auction theory is introduced because inter-agent reward distribution alone is insufficient for optimization. Agents in NaaA maximize their profits (the difference between reward and cost) and, as a theoretical result, the auction mechanism is shown to have agents autonomously evaluate counterfactual returns as the values of other agents. NaaA enables representation trades in peer-to-peer environments, ultimately regarding units in neural networks as agents. Finally, numerical experiments (a single-agent environment from OpenAI Gym and a multi-agent environment from ViZDoom) confirm that NaaA framework optimization leads to better performance in reinforcement learning. To the best of our knowledge, no existing literature discusses reward distributions in the configuration described above. Because CommNet assumes an environment that distributes a uniform reward to all the agents, if the distributed reward is in limited supply (such as money), it causes the Tragedy of the Commons BID15, where the reward of contributing agents will be reduced due to the participation of free riders. [Figure: (a) Existing reward-distribution models (BID0; BID23; BID6; BID7) suppose a TTP that distributes the optimal reward to the agents. (b) Inter-agent reward distribution model (our model): some agents receive reward from the environment directly and redistribute it to other agents; the optimal reward is determined without a TTP by playing an auction game among the agents.] Although there are several MARL methods for distributing rewards according to agents' contributions, such as QUICR BID0 and COMA BID23, they suppose the existence of a TTP and hence cannot be applied to the situation investigated here.
Results confirmed that the cameraman learned cooperative actions for sending information from dead angles (behind the main player) and outperformed CommNet in score. Interestingly, NaaA can apply to single- and multi-agent settings, since it learns the optimal topology between the units. Adaptive DropConnect (ADC), which combines DropConnect (randomly masking topology) with an adaptive algorithm (which has a higher probability of pruning connections with lower counterfactual returns), was proposed as a further application of NaaA. Experiments on classification and reinforcement learning tasks showed that ADC outperformed DropConnect. The remainder of this paper is organized as follows. In the next section, we present the problem setting. Then, we present the proposed method with its two key ideas, inter-agent reward distribution and auction theory, in Section 3. After related works are introduced in Section 4, the experimental results are shown for classification, single-agent RL and MARL in Section 5. Finally, a conclusion ends the paper. Suppose there is an N-agent system in an environment. The goal of this paper was to maximize the discounted cumulative reward the system obtained from the environment. This was calculated as G = Σ_{t=0}^{T} γ^t R^ex_t, where R^ex_t is the reward which the system obtains from the environment at t, γ ∈ [0, 1] is the discount rate, and T is the terminal time. Reward distribution is distributing R_t to all the agents under the constraint Σ_{i=1}^{N} R_it = R_t, where R_it is the reward distributed to the i-th agent at time t. For instance, in robot soccer, the environment gives a reward of 1 when an agent shoots the ball into the goal. Each agent should receive reward according to its contribution. In most MARL communication methods, the policy of reward distribution is determined by a centralized agent. For example, QUICR BID0 and COMA BID7 distribute R_it according to the counterfactual reward, the difference in reward between the case where an agent takes an action and the case where it does not. The value of the counterfactual reward is calculated by a centralized agent, called a trusted third party (TTP). In a peer-to-peer environment such as inter-industry or inter-country trade, one cannot place a TTP. Hence, another framework is required to actualize reward distribution without a TTP.
If (v i, v j) ∈ E, then connection v i → v j holds, indicating that v j observes the representation of v i. Here, the representation of agent v i at time t was denoted as x it ∈ R. Additionally, the set of agents that agent i connects to was designated to be N out i = {j|(v i, v j) ∈ E} and the set of agents that agent i is connected from was DISPLAYFORM0 The following assumptions were added to the v i characteristics: N1: (Selfishness) The utility each agent v i wants to maximize is its own return (cumulative discounted reward): DISPLAYFORM1 N2: (Conservation) The summation of internal rewards over all V equals 0. Hence, the summation of rewards which V (receive both internal and external environment R it) are equivalent to the reward R ex t, which the entire multi-agent system receives from the external environment: DISPLAYFORM2 representation signal x i before transferring this signal to the agent. Simultaneously, ρ jit will be subtracted from the reward of v j. N4: (NOOP) v i can select NOOP (no operation), for which the return is δ > 0, as an action. In NOOP, the agent inputs and outputs nothing. The social welfare function (total utility of the agents) G all is equivalent to the objective function G. That is, DISPLAYFORM3 From N2, G all = G holds. From N3, the reward R it received by v i at t can be written as: DISPLAYFORM0 which can be divided into positive and negative terms, where the former is defined as revenue, and the latter as cost. These are respectively denoted as DISPLAYFORM1 Here, R it represents profit, difference between revenue and cost. The agent v i maximizes its cumulative discounted profit, G it, represented as: DISPLAYFORM2 G it could not be observed until the end of an episode (the final time). Because predictions based on current values were needed to select optimal actions, G it was approximated with the value function DISPLAYFORM3 where s it is a observation for i-th agent at time t. Under these conditions, the following equation holds: DISPLAYFORM4 Thus, herein, we only consider maximization of revenue, the value function, and cost minimization. The inequality R it > 0 (i.e., r it > c it) indicates that the agent in question gave additional value to the obtained data. The agent selected the NOOP action because DISPLAYFORM5 If we directly optimize Eq:, a trivial solution is obtained in which the internal rewards converge at 0, and all agents (excepting agents which directly receive reward from the external environment) select NOOP as their action. This phenomenon occurs regardless of the network topology G, as no nodes are incentivized to send payments ρ ijt to other agents. With this in mind, multi-agent systems must select actions with no information, achieving the equivalent of taking random actions. For that reason, the total external reward R ex t shrinks markedly. This phenomenon also known as social dilemma in MARL, which is caused from a problem that each agent does not evaluate other agents' value truthfully to maximize their own profit. We are trying to solve this problem with auction theory in Section 3.2. To make the agents to evaluate other agents' value truthfully, the proposed objective function borrows its idea from the digital goods auction theory BID10. In general, an auction theory is a part of mechanism design intended to unveil the true price of goods. Digital goods auctions are one mechanism developed from auction theory, specifically targeting goods that may be copied without cost such as digital books and music. 
Although several variations of digital goods auctions exist, an envy-free auction BID10 was used here because it required only a simple assumption: equivalent goods have a single simultaneous price. In NaaA, this can be represented by the following assumption: DISPLAYFORM0 The assumption above indicates that ρ jit takes either 0 or a positive value, depending on i at an equal selected time t. Therefore, the positive side was named v i's price, and denoted as q it.The envy-free auction process is shown in the left section of FIG2, displaying the negotiation process between one agent sending an representation (defined as a seller), and a group of agents buying the representation (defined as buyers). First, a buyer places a bid with the agent at a bidding price b jit. Next, the seller selects the optimal priceq it and allocates the representation. Payment occurs if b ijt exceeds q jt. In this case, ρ jit = H(b jit − q it)q it holds where H(·) is a step function. For this transaction, the definition g jit = H(b jit − q it) holds, and is named allocation. After allocation, buyers perform payment: ρ jit = g jitqit. The seller sends the representation x i only to the allocated buyers. Buyers who do not receive the representation approximate DISPLAYFORM1 This negotiation is performed at each time step in reinforcement learning. The sections below discuss the revenue, cost, and value functions based on Eq:.Revenue: The revenue of an agent is given as DISPLAYFORM0 where DISPLAYFORM1 g jit is the demand, the number of agents for which the bidding price b jit is greater than or equal to q it. Because R ex i is independent of q it, the optimal priceq it maximizing r it is given as:q DISPLAYFORM2 The r it curve is shown on the right side of FIG2.Cost: Cost is defined as an internal reward that one agent pays to other agents. It is represented as: DISPLAYFORM3 where The effects of the value function were considered for both successful and unsuccessful v j purchasing cases. The value function was approximated as a linear function g it: DISPLAYFORM4 DISPLAYFORM5 where o it is equivalent to the cumulative discount value of the counterfactual reward BID0, it was named counterfactual return. As V 0 it (a constant independent of g it) is equal to the value function when v i takes an action without observing x 1,..., x N.The optimization problem is, therefore, presented below using a state-action value function for i-th where DISPLAYFORM6 DISPLAYFORM7 was taken because the asking priceq t was unknown for v i, except whenq it and g iit = 0.Then, to identify the bidding price that b it maximizes returns, the following theorem holds. This proof is shown in the Appendix A.This implies agents should only consider their counterfactual returns! When γ = 0 it is equivalent to a case without auction. Hence, the bidding value is raised if each agent considers their long-time rewards. Consequently, when the NaaA mechanism is used agents behave as if performing valuation for other agents, and declare values truthfully. Under these conditions, the following corollary holds:Corollary 3.1. The Nash equilibrium of an envy-free auction is DISPLAYFORM0 The remaining problem is how to predict o t. Q-learning was used to predict o t in this paper as the same way as QUICR BID0. As o it represented the difference between two Qs, each Q was approximated. The state was parameterized using the vector s t, which contained input and weight. 
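Before turning to the learning details, the seller's side of the envy-free auction described above fits in a few lines of NumPy. This is a toy sketch for a single seller at one timestep; how ties at the asking price are broken is our assumption.

```python
import numpy as np

def envy_free_auction(bids):
    """One envy-free auction step for a single seller.

    bids: bidding prices b_j from prospective buyers. The seller picks the
    single price q maximizing revenue q * d(q), where d(q) counts bids at or
    above q; buyers bidding >= q are allocated and each pays exactly q.
    """
    candidates = np.unique(bids)                 # optimal q sits at a bid level
    revenue = [q * np.sum(bids >= q) for q in candidates]
    q = candidates[int(np.argmax(revenue))]
    allocation = (bids >= q).astype(int)         # g_j = H(b_j - q)
    payments = allocation * q                    # rho_j = g_j * q
    return q, allocation, payments

q, g, rho = envy_free_auction(np.array([0.3, 0.9, 0.5, 0.8]))
print(q, g, rho)   # q = 0.8 maximizes q*d(q) = 1.6; two buyers are allocated
```

Note that every allocated buyer pays the same price, which is exactly the single-simultaneous-price assumption of the envy-free auction.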
The ϵ-greedy policy with Q-learning typically supposes discrete actions; thus, the allocation g_ijt was employed as the action rather than b_it and q_it. Algorithm. The overall algorithm is shown in Algorithm 1. One benefit of NaaA is that it can be used not only for MARL, but also for network training. Typical neural network training algorithms such as RMSProp and Adam BID13 are based on sequential algorithms such as stochastic gradient descent (SGD). Therefore, the problem they solve can be interpreted as a problem of updating a state (i.e., weights) toward a goal (the minimization of the expected likelihood). Learning can be accelerated by applying NaaA to the optimizer. In this paper, the application of NaaA to SGD was named Adaptive DropConnect (ADC), the finalization of which can be interpreted as a combination of DropConnect and Adaptive DropOut BID2. In the subsequent section, ADC is introduced as a potential NaaA application.
Algorithm 1 NaaA: inter-agent reward distribution with envy-free auction
1: for t = 1 to T do
2:   Compute a bidding price for every edge (each buyer bids its estimated counterfactual return)
3:   Compute an asking price for every node: q̂_it ← argmax_q q·d_it(q)
4:   for v_i ∈ V do
5:     Compute allocation: g_jit ← H(b_jit − q̂_it)
6:     Compute the price the agent should pay: ρ_jit ← g_jit·q̂_it
7:   end for
8:   Make a payment; make a shipment x_it to the allocated buyers
9: end for
Algorithm 2 Adaptive DropConnect
1: for t = 1 to T do
2:   for v_i ∈ V do
3:     Compute a bidding price for every edge
4:     Compute an asking price for every node: q̂_it ← argmax_q q·d_it(q)
5:     Compute allocation: g_jit
6:   end for
7:   Sample a switching matrix U_t from a Bernoulli distribution
8:   Sample the random mask M_t from a Bernoulli distribution
9:   Generate the adaptive mask from the allocation, U_t, and M_t
10:  Compute h_t for making a shipment
11:  Update W_t and b_t by backpropagation
12: end for
ADC uses NaaA for supervised optimization problems with multiple revisions. In such problems, the first step is the presentation of an input state (such as an image) by the environment. Agents are expected to update their parameters to maximize the rewards presented by a criterion calculator. The criterion calculator gives batch-likelihoods to agents, representing rewards. Each agent, a classifier, updates its weights to maximize the reward from the criterion calculator. These weights are recorded as an internal state. A heuristic utilizing the absolute value of the weight, |w_ijt| (the technique used by Adaptive DropOut), was applied as the counterfactual return o_ijt. The absolute value of weights was used because it represented the update amounts, for which the magnitude of the error of unit outputs was proportional to |w_ijt|. This algorithm is presented as Algorithm 2. Because the algorithm is quite simple, it can be easily implemented and, thus, applied to most general deep learning problems such as image recognition, sound recognition, and even deep reinforcement learning.
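To ground the ADC idea, here is a hypothetical PyTorch layer that keeps connections with large |w| (the stand-in for counterfactual return used above) and mixes in a plain random DropConnect mask via a Bernoulli switching matrix. The masking probabilities and the mixing scheme are our assumptions, not the paper's exact formulation; eps = 1 would recover plain DropConnect.

```python
import torch
import torch.nn as nn

class AdaptiveDropConnect(nn.Module):
    """Sketch of ADC: DropConnect whose keep decision favors high-|w| weights."""
    def __init__(self, d_in, d_out, keep=0.5, eps=0.1):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)
        self.keep, self.eps = keep, eps

    def forward(self, x):
        if not self.training:
            return self.lin(x)
        w = self.lin.weight
        k = int(self.keep * w.numel())
        thresh = w.abs().flatten().kthvalue(w.numel() - k).values
        adaptive = (w.abs() > thresh).float()            # keep top-|w| connections
        random = (torch.rand_like(w) < self.keep).float()  # plain DropConnect mask
        switch = (torch.rand_like(w) < self.eps).float()   # Bernoulli switching
        mask = switch * random + (1 - switch) * adaptive
        return nn.functional.linear(x, w * mask, self.lin.bias)

layer = AdaptiveDropConnect(20, 10)
out = layer(torch.randn(4, 20))
```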
Instead of reward R(a t) of an agent i for actions at t a t, QUICR-learning BID0 ) maximizes counterfactual reward R(a t) − R(a t − a it), the difference in the case of the agent i takes an action a it (a t) and not (a t − a it). COMA BID7 ) also maximizes counterfactual rewards in an actor-critic setting. CommNet, QUICR and COMA have a centralized environment for distributing rewards through a TTP. In contrast, NaaA does not rely on a TTP, and hence, each agent calculates its reward. While inter-agent reward distribution has not been considered in the context of communication, trading agents have been considered in other contexts. Trading agent competition (TACs), competitions for trading agent design, have been held in various locations regarding topics such as smart grids BID12, wholesale BID14, and supply chains BID19, yielding innumerable trading algorithms such as Tesauro's bidding algorithm BID26 and TacTex'13 . Since several competitions employed an auction as optimal price determination mechanism (; BID22, using auctions to determine optimal prices is now a natural approach. Unfortunately, these existing methods cannot be applied to the present situation. First, their agents did not communicate because the typical purpose of a TAC is to create market competition between agents in a zero-sum game. Secondly, the traded goods are not digital goods but instead goods in limited supply, such as power and tariffs. Hence, this is the first paper to introduce inter-agent reward distribution to MARL communications. Auction theory is discussed in terms of mechanism design BID17, also known as inverse game theory. Second-price auctions are auctions including a single product and several buyers. In this paper, a digital goods auction BID10 was used as an auction with an infinite supply. Several methods extend digital goods auction to address collusion, including the consensus estimate BID8 and random sample auction BID9, which can be used to improve our method. This paper is also related to DropConnect in terms of controlling connections between units. Adaptive DropConnect (ADC), proposed in a later section of this paper as a further application, extends the DropConnect regularization technique. The finalized idea of ADC (which uses a skew probability correlated to the absolute value of weights rather than dropping each connection between units by a constant probability) is closer to Adaptive DropOut BID2, although their derivation differs. The adjective "adaptive" is added with respect to the method. Neural network optimizing using RL was investigated by BID1; however, their methods used a recurrent neural network (RNN) and are therefore difficult to implement, whereas the proposed method is RNN-free and forms as a layer. For these reasons, its implementation is simple and fast and it also has a wide area of applicability. To confirm that NaaA works widely with machine learning tasks, we confirm our method of supervised learning tasks as well as reinforcement learning tasks. As supervised learning tasks, we use typical machine learning tasks such as image classification using MNIST, CIFAR-10, and SVHN.As reinforcement tasks, we confirm single-and multi-agent environment. The single-agent environment is from OpenAI Gym. We confirm the using a simple reinforcement task: CartPole. In multi-agent, we use ViZDoom, a 3D environment for reinforcement learning. For classification, three types of datasets were used: MNIST, CIFAR-10, and STL-10. 
The given task was to predict the label of each image, and each dataset had a class number of 10. The first dataset, MNIST, was a collection of black and white images of handwritten digits sized 28 28. The training and test sets contained 60,000 and 10,000 example images, respectively. The CIFAR-10 dataset images were colored and sized 32 32, and the assigned task was to predict what was shown in each picture. This dataset contained 6,000 images per class (5,000 for training and 1,000 for testing). The STL-10 dataset was used for image recognition, and had 1,300 images for each class (500 training, 800 testing). Each image was sized 96 96; however, for the experiment, the images were resized to 48 48 because the greater resolution of this dataset (relative to the above datasets) required far more computing time and resources. Two models were compared in this experiment: DropConnect and Adaptive DropConnect (the model proposed in this paper). The baseline model was composed of two convolutional layers and two fully connected layers whose outputs are dropped out (we set the possibility as 0.5). The labels of input data were predicted using log-softmaxed values from the last fully connected layer. In the DropConnect and Adaptive DropConnect models, the first fully connected layer was replaced by a DropConnected and Adaptive DropConnected layer, respectively. It should be noted that the DropConnect model corresponded to the proposed method when ε = 1.0, meaning agents did not perform their auctions but instead randomly masked their weights. The models were trained over ten epochs using the MNIST datasets, and were then evaluated using the test data. The CIFAR-10 and STL-10 epoch numbers were 20 and 40, respectively. Experiments were repeated 20 times for each condition, and the averages and standard deviations of error rates were calculated. Results are shown in TAB1. As expected, the Adaptive DropConnect model performed with a lower classification error rate than either the baseline or DropConnect models regardless of the given experimental datasets. Next, the single-agent reinforcement learning task was set as the CartPole task from OpenAI Gym BID3 with visual inputs. In this setting, the agent was required to balance a pole while moving a cart. The images contained a large amount of non-useful information, making pixel pruning important. The in TAB1 demonstrates that our method improves the standard RL. The proposed reward distribution method was confirmed to work as expected by a validation experiment using the multi-agent setting in ViZDoom BID11, an emulator of Doom containing a map editor where additional agents complement the main player. A main player in the ViZDoom environment aims to seek the enemy in the map and then defeat the enemy. A defend the center (DtC)-based scenario, provided by ViZDoom platform, was used for this experiment. Two players, a main player and a cameraman, were placed in the DtC, where they started in the center of a circular field and then attacked enemies that came from the surrounding wall. Although the main player could attack the enemy with bullets, the cameraman had no way to attack, only scouting for the enemy. The action space for the main player was the combination of {attack, turn left, turn right}, giving a total number of actions 2 3 = 8. The cameraman had two possible actions: {turn left, turn right}. Although the players could change direction, they could not move on the field. 
Enemies died after receiving one attack (bullet) from the main player, and then player received a score of +1 for each successful attack. The main player received 26 bullets by default at the beginning of each episode. The main player died if they received attacks from the enemy to the extent that their health dropped to 0, and received a score of -1 for each death. The cameraman did not die if attacked by an enemy. Episodes terminated either when the maim player died or after 525 steps elapsed. Figure 4: NaaA leads agents to enter a cooperative relationship. First, the two agents face different directions, and the cameraman sells their information to the main player. The main player (information buyer) starts to turn right to find the enemy. The cameraman (information seller) starts to turn left to seek new information by finding the blind area of the main player (2 and 3). After turning, the main player attacks the first, having already identified enemy (4 and 5). Once the main player finds the enemy, he attacks and obtains the reward (6 and 7). Both agents then return to watching the dead area of the other until the next enemy appears. Three models, described below, were compared: the proposed method and two comparison targets. Baseline: DQN without communication. The main player learned standard DQN with the perspective that the player is viewing. Because the cameraman did not learn, this player continued to move randomly. Comm: DQN with communication, inspired by Commnet. The main player learns DQN with two perspectives: theirs and that of the cameraman. The communication vector is learned with a feedforward neural network. NaaA: The proposed method. The main player learned DQN with two perspectives: theirs and that of the cameraman. Transmissions of rewards and communications were performed using the proposed method. Training was performed over the course of 10 million steps. FIG3 Left demonstrates the proposed NaaA model outperformed the other two methods. Improvement was achieved by Adaptive DropConnect. It was confirmed that the cameraman observed the enemy through an episode, which could be interpreted as the cameraman reporting enemy positions. In addition to seeing the enemy, the cameraman observed the area behind the main player several times. This enabled the cameraman to observe enemy attacks while taking a better relative position. To further interpret this , a heatmap visualization of revenue earned by the agent is presented in FIG3 Right. The picture is a screen from Doom, recorded at the moment when the CNN filter was most activated. Figure 4 shows an example of learnt sequence of actions by our method. This paper proposed a NaaA model to address communication in MARL without a TTP based on two key ideas: inter-agent reward distribution and auction theory. Existing MARL communication methods have assumed the existence of a TTP, and hence could not be applied in peer-to-peer environments. The inter-agent reward distribution, making agents redistribute the rewards they received from the internal/external environment, was reviewed first. When an envy-free auction was introduced using auction theory, it was shown that agents would evaluate the counterfactual returns of other agents. The experimental demonstrated that NaaA outperformed a baseline method and a CommNet-based method. Furthermore, a Q-learning based algorithm, termed Adaptive DropConnect, was proposed to dynamically optimize neural network topology with counterfactual return evaluation as a further application. 
To evaluate this application, experiments were performed on a single-agent platform, demonstrating that the proposed method produced improved experimental results relative to existing methods. Future research may also be directed toward considering the connection between NaaA and neuroscience or neuroevolution. Edelman propounded the concept of neural Darwinism BID5, in which group selection occurs in the brain. Inter-agent rewards, which were assumed in this paper, correspond to NTFs and could be used as a fitness function in genetic algorithms for neuroevolution, such as hyperparameter tuning. As NaaA can be applied in peer-to-peer environments, the implementation of NaaA on a blockchain BID24 is under consideration. This implementation would extend the areas where deep reinforcement learning could be applied. Bitcoin BID18 could be used for inter-agent reward distribution, and the auction mechanism could be implemented with smart contracts BID4. Using the NaaA reward design, it is hoped that the world may be united, allowing people to share their own representations on a global scale. The optimization problem in Eq. 11 consists of two terms apart from the constant, and only the second term depends on b. Hence, we consider optimizing the second term. The optimal bidding price q̂_t is given by the following equation. DISPLAYFORM0 By independence, the equation is solved if we solve the following problem. DISPLAYFORM1 Hence, b̂_ijt can be derived as the solution which satisfies the following equation. DISPLAYFORM2 For simplicity, we let q = q_jt and o = o_ij,t+1. Then, the following equations hold. DISPLAYFORM3 DISPLAYFORM4
Neuron as an Agent (NaaA) enables us to train multi-agent communication without a trusted third party.
980
scitldr
Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks. However, at some point further model increases become harder due to GPU/TPU memory limitations, longer training times, and unexpected model degradation. To address these problems, we present two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT. Comprehensive empirical evidence shows that our proposed methods lead to models that scale much better compared to the original BERT. We also use a self-supervised loss that focuses on modeling inter-sentence coherence, and show it consistently helps downstream tasks with multi-sentence inputs. As a result, our best model establishes new state-of-the-art results on the GLUE, RACE, and SQuAD benchmarks while having fewer parameters compared to BERT-large. Full-network pre-training has led to a series of breakthroughs in language representation learning. Many nontrivial NLP tasks, including those that have limited training data, have greatly benefited from these pre-trained models. One of the most compelling signs of these breakthroughs is the evolution of machine performance on a reading comprehension task designed for middle- and high-school English exams in China, the RACE test: the paper that originally described the task and formulated the modeling challenge reported the then state-of-the-art machine accuracy of 44.1%; the latest published work reports model performance at 83.2%; the work we present here pushes it even higher, to 89.4%, a stunning 45.3% improvement that is mainly attributable to our current ability to build high-performance pretrained language representations. Evidence from these improvements reveals that a large network is of crucial importance for achieving state-of-the-art performance. It has become common practice to pre-train large models and distill them down to smaller ones for real applications. Given the importance of model size, we ask: Is having better NLP models as easy as having larger models? An obstacle to answering this question is the memory limitations of available hardware. Given that current state-of-the-art models often have hundreds of millions or even billions of parameters, it is easy to hit these limitations as we try to scale our models. Training speed can also be significantly hampered in distributed training, as the communication overhead is directly proportional to the number of parameters in the model. We also observe that simply growing the hidden size of a model such as BERT-large can lead to worse performance. Table 1 and Fig. 1 show a typical example, where we simply increase the hidden size of BERT-large to be 2x larger and get worse results with this BERT-xlarge model.

Model              | Hidden Size | Parameters | RACE (Accuracy)
BERT-large         | 1024        | 334M       | 72.0%
BERT-large (ours)  | 1024        | 334M       | 73.9%
BERT-xlarge (ours) | 2048        | 1270M      | 54.3%
Table 1: Increasing the hidden size of BERT-large leads to worse performance on RACE.

Existing solutions to the aforementioned problems include model parallelization and clever memory management. These solutions address the memory limitation problem, but not the communication overhead and model degradation problems. In this paper, we address all of the aforementioned problems by designing A Lite BERT (ALBERT) architecture that has significantly fewer parameters than a traditional BERT architecture. ALBERT incorporates two parameter-reduction techniques that lift the major obstacles in scaling pre-trained models.
The first one is a factorized embedding parameterization. By decomposing the large vocabulary embedding matrix into two small matrices, we separate the size of the hidden layers from the size of the vocabulary embedding. This separation makes it easier to grow the hidden size without significantly increasing the parameter size of the vocabulary embeddings. The second technique is cross-layer parameter sharing. This technique prevents the parameter count from growing with the depth of the network. Both techniques significantly reduce the number of parameters for BERT without seriously hurting performance, thus improving parameter-efficiency. An ALBERT configuration similar to BERT-large has 18x fewer parameters and can be trained about 1.7x faster. The parameter-reduction techniques also act as a form of regularization that stabilizes the training and helps with generalization. To further improve the performance of ALBERT, we also introduce a self-supervised loss for sentence-order prediction (SOP). SOP primarily focuses on inter-sentence coherence and is designed to address the ineffectiveness of the next sentence prediction (NSP) loss proposed in the original BERT. As a result of these design decisions, we are able to scale up to much larger ALBERT configurations that still have fewer parameters than BERT-large but achieve significantly better performance. We establish new state-of-the-art results on the well-known GLUE, SQuAD, and RACE benchmarks for natural language understanding. Specifically, we push the RACE accuracy to 89.4%, the GLUE benchmark to 89.4, and the F1 score of SQuAD 2.0 to 92.2. Learning representations of natural language has been shown to be useful for a wide range of NLP tasks and has been widely adopted. One of the most significant changes in the last two years is the shift from pre-training word embeddings, whether standard or contextualized, to full-network pre-training followed by task-specific fine-tuning. In this line of work, it is often shown that larger model size improves performance. For example, it has been shown that across three selected natural language understanding tasks, using larger hidden size, more hidden layers, and more attention heads always leads to better performance. However, that work stops at a hidden size of 1024. We show that, under the same setting, increasing the hidden size to 2048 leads to model degradation and hence worse performance. Therefore, scaling up representation learning for natural language is not as easy as simply increasing model size. In addition, it is difficult to experiment with large models due to computational constraints, especially in terms of GPU/TPU memory limitations. Given that current state-of-the-art models often have hundreds of millions or even billions of parameters, we can easily hit memory limits. To address this issue, one line of work proposes a method called gradient checkpointing to reduce the memory requirement to be sublinear at the cost of an extra forward pass; another proposes a way to reconstruct each layer's activations from the next layer so that intermediate activations need not be stored. Both methods reduce memory consumption at the cost of speed. In contrast, our parameter-reduction techniques reduce memory consumption and increase training speed. The idea of sharing parameters across layers has been previously explored with the Transformer architecture, but this prior work has focused on training for standard encoder-decoder tasks rather than the pretraining/finetuning setting.
Different from our observations, prior work shows that networks with cross-layer parameter sharing (Universal Transformer, UT) get better performance on language modeling and subject-verb agreement than the standard transformer. Very recently, a Deep Equilibrium Model (DQE) has been proposed for transformer networks, showing that DQE can reach an equilibrium point for which the input embedding and the output embedding of a certain layer stay the same. Our observations show that our embeddings are oscillating rather than converging. Other work combines a parameter-sharing transformer with the standard one, which further increases the number of parameters of the standard transformer. ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text. Several researchers have experimented with pretraining objectives that similarly relate to discourse coherence. Coherence and cohesion in discourse have been widely studied, and many phenomena have been identified that connect neighboring text segments. Most objectives found effective in practice are quite simple. Skip-thought and FastSent sentence embeddings are learned by using an encoding of a sentence to predict words in neighboring sentences. Other objectives for sentence-embedding learning include predicting future sentences rather than only neighbors and predicting explicit discourse markers. Our loss is most similar to a sentence-ordering objective from prior work, where sentence embeddings are learned in order to determine the ordering of two consecutive sentences. Unlike most of the above work, however, our loss is defined on textual segments rather than sentences. BERT uses a loss based on predicting whether the second segment in a pair has been swapped with a segment from another document. We compare to this loss in our experiments and find that sentence ordering is a more challenging pretraining task and more useful for certain downstream tasks. Concurrently to our work, others also try to predict the order of two consecutive segments of text, but they combine it with the original next sentence prediction in a three-way classification task rather than empirically comparing the two. In this section, we present the design decisions for ALBERT and provide quantified comparisons against corresponding configurations of the original BERT architecture. The backbone of the ALBERT architecture is similar to BERT in that it uses a transformer encoder with GELU nonlinearities. We follow the BERT notation conventions and denote the vocabulary embedding size as E, the number of encoder layers as L, and the hidden size as H. Following BERT, we set the feed-forward/filter size to be 4H and the number of attention heads to be H/64. There are three main contributions that ALBERT makes over the design choices of BERT. Factorized embedding parameterization. In BERT, as well as subsequent modeling improvements such as XLNet and RoBERTa, the WordPiece embedding size E is tied with the hidden layer size H, i.e., E ≡ H. This decision appears suboptimal for both modeling and practical reasons, as follows. From a modeling perspective, WordPiece embeddings are meant to learn context-independent representations, whereas hidden-layer embeddings are meant to learn context-dependent representations. As experiments with context length indicate, the power of BERT-like representations comes from the use of context to provide the signal for learning such context-dependent representations.
As such, untying the WordPiece embedding size E from the hidden layer size H allows us to make more efficient use of the total model parameters as informed by modeling needs, which dictate that H ≫ E. From a practical perspective, natural language processing applications usually require the vocabulary size V to be large. If E ≡ H, then increasing H increases the size of the embedding matrix, which has size V × E. This can easily result in a model with billions of parameters, most of which are only updated sparsely during training. Therefore, for ALBERT we use a factorization of the embedding parameters, decomposing them into two smaller matrices. Instead of projecting the one-hot vectors directly into the hidden space of size H, we first project them into a lower-dimensional embedding space of size E, and then project that to the hidden space. By using this decomposition, we reduce the embedding parameters from O(V × H) to O(V × E + E × H). This parameter reduction is significant when H ≫ E. We choose to use the same E for all word pieces because they are much more evenly distributed across documents compared to whole-word embeddings, where having different embedding sizes for different words is important. Cross-layer parameter sharing. For ALBERT, we propose cross-layer parameter sharing as another way to improve parameter efficiency. There are multiple ways to share parameters, e.g., only sharing feed-forward network (FFN) parameters across layers, or only sharing attention parameters. The default decision for ALBERT is to share all parameters across layers. All our experiments use this default decision unless otherwise specified. We compare this design decision against other strategies in our experiments in Sec. 4.5. Similar strategies have been explored with the Universal Transformer (UT) and Deep Equilibrium Models (DQE) for Transformer networks. Different from our observations, the UT has been shown to outperform a vanilla Transformer, and DQEs have been shown to reach an equilibrium point for which the input and output embedding of a certain layer stay the same. Our measurements of the L2 distance and cosine similarity show that our embeddings are oscillating rather than converging. Figure 2 shows the L2 distances and cosine similarities of the input and output embeddings for each layer, using BERT-large and ALBERT-large configurations (see Table 2). We observe that the transitions from layer to layer are much smoother for ALBERT than for BERT. These results show that weight-sharing has an effect on stabilizing network parameters. Although there is a drop for both metrics compared to BERT, they nevertheless do not converge to 0 even after 24 layers. This shows that the solution space for ALBERT parameters is very different from the one found by DQE. Inter-sentence coherence loss. In addition to the masked language modeling (MLM) loss, BERT uses an additional loss called next-sentence prediction (NSP). NSP is a binary classification loss for predicting whether two segments appear consecutively in the original text, as follows: positive examples are created by taking consecutive segments from the training corpus; negative examples are created by pairing segments from different documents; positive and negative examples are sampled with equal probability. The NSP objective was designed to improve performance on downstream tasks, such as natural language inference, that require reasoning about the relationship between sentence pairs.
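A minimal sketch of the factorized embedding parameterization, assuming V = 30,000, E = 128, and H = 768 as in the base configuration; the module and variable names are our own, not the paper's implementation:

```python
import torch.nn as nn

class FactorizedEmbedding(nn.Module):
    """Embedding factorized as a V x E lookup followed by an E x H projection."""
    def __init__(self, vocab_size=30000, embed_size=128, hidden_size=768):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, embed_size)         # V x E
        self.project = nn.Linear(embed_size, hidden_size, bias=False)  # E x H

    def forward(self, token_ids):
        return self.project(self.word_embed(token_ids))

# Parameter comparison for V = 30,000, H = 768:
#   tied (BERT-style):   V * H          = 23.0M
#   factorized (E=128):  V * E + E * H  =  3.9M
```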
However, subsequent studies found NSP's impact unreliable and decided to eliminate it, a decision supported by an improvement in downstream task performance across several tasks. We conjecture that the main reason behind NSP's ineffectiveness is its lack of difficulty as a task, as compared to MLM. As formulated, NSP conflates topic prediction and coherence prediction in a single task. However, topic prediction is easier to learn compared to coherence prediction, and it also overlaps more with what is learned using the MLM loss. We maintain that inter-sentence modeling is an important aspect of language understanding, but we propose a loss based primarily on coherence. That is, for ALBERT, we use a sentence-order prediction (SOP) loss, which avoids topic prediction and instead focuses on modeling inter-sentence coherence. The SOP loss uses as positive examples the same technique as BERT (two consecutive segments from the same document), and as negative examples the same two consecutive segments but with their order swapped. This forces the model to learn finer-grained distinctions about discourse-level coherence properties. As we show in Sec. 4.6, it turns out that NSP cannot solve the SOP task at all (i.e., it ends up learning the easier topic-prediction signal, and performs at random-baseline level on the SOP task), while SOP can solve the NSP task to a reasonable degree, presumably based on analyzing misaligned coherence cues. As a result, ALBERT models consistently improve downstream task performance for multi-sentence encoding tasks. We present the differences between BERT and ALBERT models with comparable hyperparameter settings in Table 2. Due to the design choices discussed above, ALBERT models have much smaller parameter sizes compared to corresponding BERT models. For example, ALBERT-large has about 18x fewer parameters compared to BERT-large, 18M versus 334M. If we set BERT to have an extra-large size with H = 2048, we end up with a model that has 1.27 billion parameters and under-performs (Fig. 1). In contrast, an ALBERT-xlarge configuration with H = 2048 has only 60M parameters, while an ALBERT-xxlarge configuration with H = 4096 has 233M parameters, i.e., around 70% of BERT-large's parameters.

Model          | Parameters | Layers | Hidden | Embedding | Parameter-sharing
BERT base      | 108M       | 12     | 768    | 768       | False
BERT large     | 334M       | 24     | 1024   | 1024      | False
BERT xlarge    | 1270M      | 24     | 2048   | 2048      | False
ALBERT base    | 12M        | 12     | 768    | 128       | True
ALBERT large   | 18M        | 24     | 1024   | 128       | True
ALBERT xlarge  | 60M        | 24     | 2048   | 128       | True
ALBERT xxlarge | 235M       | 12     | 4096   | 128       | True
Table 2: The configurations of the main BERT and ALBERT models analyzed in this paper.

Note that for ALBERT-xxlarge, we mainly report results on a 12-layer network because a 24-layer network (with the same configuration) obtains similar results but is computationally more expensive. This improvement in parameter efficiency is the most important advantage of ALBERT's design choices. Before we can quantify this advantage, we need to introduce our experimental setup in more detail. To keep the comparison as meaningful as possible, we follow the BERT setup in using the BOOKCORPUS and English Wikipedia for pretraining baseline models. These two corpora consist of around 16GB of uncompressed text. We format our inputs as "[CLS] x1 [SEP] x2 [SEP]", where x1 = x1,1, x1,2 · · · and x2 = x2,1, x2,2 · · · are two segments. We always limit the maximum input length to 512, and randomly generate input sequences shorter than 512 with a probability of 10%. Like BERT, we use a vocabulary size of 30,000, tokenized using SentencePiece as in XLNet.
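The SOP example construction described above is straightforward to sketch; the helper below is a simplified illustration with hypothetical function names (real preprocessing additionally packs segments up to the 512-token limit):

```python
import random

def make_sop_example(doc_segments):
    """Build one sentence-order-prediction example from consecutive segments.

    Positive: (seg_a, seg_b) in original order; negative: order swapped.
    """
    i = random.randrange(len(doc_segments) - 1)
    seg_a, seg_b = doc_segments[i], doc_segments[i + 1]
    if random.random() < 0.5:
        return (seg_a, seg_b), 1   # correct order
    return (seg_b, seg_a), 0       # swapped order

doc = ["The cat sat.", "Then it slept.", "Later it ate."]
pair, label = make_sop_example(doc)
```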
We generate masked inputs for the MLM targets using n-gram masking, with the length of each n-gram mask selected randomly. The probability of length n is given by p(n) = (1/n) / (Σ_{k=1}^{N} 1/k). We set the maximum n-gram length N to be 3 (i.e., the MLM target can consist of up to a 3-gram of complete words, such as "White House correspondents"). All model updates use a batch size of 4096 and a LAMB optimizer with learning rate 0.00176. We train all models for 125,000 steps unless otherwise specified. Training was done on Cloud TPU V3. The number of TPUs used for training ranged from 64 to 1024, depending on model size. The experimental setup described in this section is used for all of our own versions of BERT as well as ALBERT models, unless otherwise specified. To monitor training progress, we create a development set based on the development sets from SQuAD and RACE using the same procedure as in Sec. 4.1. We report accuracies for both MLM and sentence-classification tasks. Note that we only use this set to check how the model is converging; it has not been used in a way that would affect the performance of any downstream evaluation, such as via model selection. Following prior work, we evaluate our models on three popular benchmarks: the General Language Understanding Evaluation (GLUE) benchmark, two versions of the Stanford Question Answering Dataset (SQuAD), and the ReAding Comprehension from Examinations (RACE) dataset. For completeness, we provide descriptions of these benchmarks in Appendix A.1. As in prior work, we perform early stopping on the development sets, on which we report all comparisons except for our final comparisons based on the task leaderboards, for which we also report test set results. We are now ready to quantify the impact of the design choices described in Sec. 3, specifically the ones around parameter efficiency. The improvement in parameter efficiency showcases the most important advantage of ALBERT's design choices, as shown in Table 3: with only around 70% of BERT-large's parameters, ALBERT-xxlarge achieves significant improvements over BERT-large, as measured by the difference in development set scores for several representative downstream tasks: SQuAD v1.1 (+1.9%), SQuAD v2.0 (+3.1%), MNLI (+1.4%), SST-2 (+2.2%), and RACE (+8.4%). We also observe that BERT-xlarge gets significantly worse results than BERT-base on all metrics. This indicates that a model like BERT-xlarge is more difficult to train than those that have smaller parameter sizes. Another interesting observation is the speed of data throughput at training time under the same training configuration (same number of TPUs). Because of less communication and fewer computations, ALBERT models have higher data throughput compared to their corresponding BERT models. The slowest one is the BERT-xlarge model, which we use as a baseline. As the models get larger, the differences between BERT and ALBERT models become bigger, e.g., ALBERT-xlarge can be trained 2.4x faster than BERT-xlarge. Table 3: Dev set results for models pretrained over BOOKCORPUS and Wikipedia for 125k steps. Here and everywhere else, the Avg column is computed by averaging the scores of the downstream tasks to its left (the two numbers of F1 and EM for each SQuAD are first averaged). Next, we perform ablation experiments that quantify the individual contribution of each of the design choices for ALBERT.
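The n-gram mask-length distribution above can be sampled as follows; this is a minimal sketch with the normalization made explicit, not the paper's actual preprocessing code:

```python
import random

def sample_ngram_length(max_n=3):
    """Sample a mask length n with p(n) proportional to 1/n, for n <= max_n."""
    weights = [1.0 / n for n in range(1, max_n + 1)]
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(range(1, max_n + 1), weights=probs, k=1)[0]

# For max_n = 3: p(1) = 6/11, p(2) = 3/11, p(3) = 2/11.
```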
Table 4 shows the effect of changing the vocabulary embedding size E using an ALBERT-base configuration setting (see Table 2), using the same set of representative downstream tasks. Under the non-shared condition (BERT-style), larger embedding sizes give better performance, but not by much. Under the all-shared condition (ALBERT-style), an embedding of size 128 appears to be the best. Based on these results, we use an embedding size E = 128 in all future settings, as a necessary step for further scaling. Table 5 presents experiments for various cross-layer parameter-sharing strategies, using an ALBERT-base configuration (Table 2) with two embedding sizes (E = 768 and E = 128). We compare the all-shared strategy (ALBERT-style), the not-shared strategy (BERT-style), and intermediate strategies in which only the attention parameters are shared (but not the FFN ones) or only the FFN parameters are shared (but not the attention ones). Table 4: The effect of vocabulary embedding size on the performance of ALBERT-base. The all-shared strategy hurts performance under both conditions, but it is less severe for E = 128 (-1.5 on Avg) compared to E = 768 (-2.5 on Avg). In addition, most of the performance drop appears to come from sharing the FFN-layer parameters, while sharing the attention parameters results in no drop when E = 128 (+0.1 on Avg), and a slight drop when E = 768 (-0.7 on Avg). There are other strategies for sharing parameters across layers. For example, we can divide the L layers into N groups of size M, where each size-M group shares parameters. Overall, our experimental results show that the smaller the group size M is, the better the performance we get. However, decreasing group size M also dramatically increases the number of overall parameters. We choose the all-shared strategy as our default choice. Table 5: The effect of cross-layer parameter-sharing strategies, ALBERT-base configuration. We compare head-to-head three experimental conditions for the additional inter-sentence loss: none (XLNet- and RoBERTa-style), NSP (BERT-style), and SOP (ALBERT-style), using an ALBERT-base configuration. Results are shown in Table 6. The results on the intrinsic tasks reveal that the NSP loss brings no discriminative power to the SOP task (52.0% accuracy, similar to the random-guess performance of the "None" condition). This allows us to conclude that NSP ends up modeling only topic shift. In contrast, the SOP loss does solve the NSP task relatively well (78.9% accuracy), and the SOP task even better (86.5% accuracy). Even more importantly, the SOP loss appears to consistently improve downstream task performance for multi-sentence encoding tasks (around +1% for SQuAD1.1, +2% for SQuAD2.0, +1.7% for RACE), for an Avg score improvement of around +1%. In this section, we check how depth (number of layers) and width (hidden size) affect the performance of ALBERT. Table 7 shows the performance of an ALBERT-large configuration (see Table 2) using different numbers of layers. Networks with 3 or more layers are trained by fine-tuning using the parameters from the previous depth (e.g., the 12-layer network parameters are fine-tuned from the checkpoint of the 6-layer network parameters). A similar technique has been used in prior work. If we compare a 3-layer ALBERT model with a 1-layer ALBERT model, although they have the same number of parameters, the performance increases significantly.
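For concreteness, the all-shared strategy amounts to applying a single transformer layer repeatedly. The sketch below illustrates this with PyTorch's built-in encoder layer; the dimensions follow the base configuration, and the real ALBERT implementation differs in details (e.g., attention internals and normalization placement):

```python
import torch.nn as nn

class SharedLayerEncoder(nn.Module):
    """Apply one transformer layer L times (ALBERT-style all-shared strategy)."""
    def __init__(self, hidden_size=768, num_heads=12, num_layers=12):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(
            d_model=hidden_size, nhead=num_heads,
            dim_feedforward=4 * hidden_size, batch_first=True)
        self.num_layers = num_layers

    def forward(self, x):
        for _ in range(self.num_layers):   # the same weights at every depth
            x = self.layer(x)
        return x
```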
However, there are diminishing returns when continuing to increase the number of layers: the results of a 12-layer network are relatively close to those of a 24-layer network, and the performance of a 48-layer network appears to decline. Table 7: The effect of increasing the number of layers for an ALBERT-large configuration. A similar phenomenon, this time for width, can be seen in Table 8 for a 3-layer ALBERT-large configuration. As we increase the hidden size, we get an increase in performance with diminishing returns. At a hidden size of 6144, the performance appears to decline significantly. We note that none of these models appear to overfit the training data, and they all have higher training and development loss compared to the best-performing ALBERT configurations. Table 8: The effect of increasing the hidden-layer size for an ALBERT-large 3-layer configuration. The speed-up results in Table 3 indicate that data throughput for BERT-large is about 3.17x higher compared to ALBERT-xxlarge. Since longer training usually leads to better performance, we perform a comparison in which, instead of controlling for data throughput (number of training steps), we control for the actual training time (i.e., we let the models train for the same number of hours). In Table 9, we compare the performance of a BERT-large model after 400k training steps (after 34h of training), roughly equivalent to the amount of time needed to train an ALBERT-xxlarge model with 125k training steps (32h of training). Table 9: The effect of controlling for training time, BERT-large vs ALBERT-xxlarge configurations. After training for roughly the same amount of time, ALBERT-xxlarge is significantly better than BERT-large: +1.5% better on Avg, with the difference on RACE as high as +5.2%. In Section 4.7, we show that for ALBERT-large (H=1024), the difference between a 12-layer and a 24-layer configuration is small. Does this still hold for much wider ALBERT configurations, such as ALBERT-xxlarge (H=4096)? Table 10: The effect of a deeper network using an ALBERT-xxlarge configuration. The answer is given by the results from Table 10. The difference between 12-layer and 24-layer ALBERT-xxlarge configurations in terms of downstream accuracy is negligible, with the Avg score being the same. We conclude that, when sharing all cross-layer parameters (ALBERT-style), there is no need for models deeper than a 12-layer configuration. The experiments done up to this point use only the Wikipedia and BOOKCORPUS datasets, as in BERT. In this section, we report measurements of the impact of the additional data used by both XLNet and RoBERTa. Fig. 3a plots the dev set MLM accuracy under two conditions, without and with additional data, with the latter condition giving a significant boost. We also observe performance improvements on the downstream tasks in Table 11, except for the SQuAD benchmarks (which are Wikipedia-based, and therefore are negatively affected by out-of-domain training material). Table 11: The effect of additional training data using the ALBERT-base configuration. We also note that, even after training for 1M steps, our largest models still do not overfit their training data. As a result, we decide to remove dropout to further increase our model capacity. The plot in Fig. 3b shows that removing dropout significantly improves MLM accuracy. Intermediate evaluation on ALBERT-xxlarge at around 1M training steps (Table 12) also confirms that removing dropout helps the downstream tasks.
There is empirical and theoretical evidence showing that a combination of batch normalization and dropout in convolutional neural networks may have harmful results. To the best of our knowledge, we are the first to show that dropout can hurt performance in large Transformer-based models. However, the underlying network structure of ALBERT is a special case of the transformer, and further experimentation is needed to see whether this phenomenon appears with other transformer-based architectures. Table 12: The effect of removing dropout, measured for an ALBERT-xxlarge configuration. The results we report in this section make use of the training data used by BERT, as well as the additional data used by XLNet and RoBERTa. We report state-of-the-art results under two settings for fine-tuning: single-model and ensembles. In both settings, we only do single-task fine-tuning. Following prior work, on the development set we report the median result over five runs. The single-model ALBERT configuration incorporates the best-performing settings discussed: an ALBERT-xxlarge configuration (Table 2) using combined MLM and SOP losses, and no dropout. The checkpoints that contribute to the final ensemble model are selected based on development set performance; the number of checkpoints considered for this selection ranges from 6 to 17, depending on the task. For the GLUE (Table 13) and RACE (Table 14) benchmarks, we average the model predictions for the ensemble models, where the candidates are fine-tuned from different training steps using the 12-layer and 24-layer architectures. For SQuAD (Table 14), we average the prediction scores for those spans that have multiple probabilities; we also average the scores of the "unanswerable" decision. Both single-model and ensemble results indicate that ALBERT improves the state-of-the-art significantly for all three benchmarks, achieving a GLUE score of 89.4, a SQuAD 2.0 test F1 score of 92.2, and a RACE test accuracy of 89.4. The latter appears to be a particularly strong improvement, a jump of +17.4% absolute points over BERT, +7.6% over XLNet, +6.2% over RoBERTa, and +5.3% over DCMI+, an ensemble of multiple models specifically designed for reading comprehension tasks. Our single model achieves an accuracy of 86.5%, which is still 2.4% better than the state-of-the-art ensemble model. Table 13: State-of-the-art results on the GLUE benchmark. For single-task single-model results, we report ALBERT at 1M steps (comparable to RoBERTa) and at 1.5M steps. The ALBERT ensemble uses models trained with 1M, 1.5M, and other numbers of steps. While ALBERT-xxlarge has fewer parameters than BERT-large and gets significantly better results, it is computationally more expensive due to its larger structure. An important next step is thus to speed up the training and inference of ALBERT through methods like sparse attention and block attention. An orthogonal line of research, which could provide additional representation power, includes hard example mining and more efficient language model training. Additionally, although we have convincing evidence that sentence-order prediction is a more consistently useful learning task that leads to better language representations, we hypothesize that there could be more dimensions not yet captured by the current self-supervised training losses that could create additional representation power for the resulting representations. RACE. RACE is a large-scale dataset for multi-choice reading comprehension, collected from English examinations in China, with nearly 100,000 questions. Each instance in RACE has 4 candidate answers.
Following prior work, we use the concatenation of the passage, question, and each candidate answer as the input to the models. Then, we use the representation of the "[CLS]" token for predicting the probability of each answer. The dataset consists of two domains: middle school and high school. We train our models on both domains and report accuracies on both the development set and test set. A.2 HYPERPARAMETERS. Hyperparameters for downstream tasks are shown in Table 15. We adapt these hyperparameters from prior work, including Yang et al. (2019).
A new pretraining method that establishes new state-of-the-art results on the GLUE, RACE, and SQuAD benchmarks while having fewer parameters compared to BERT-large.
981
scitldr
Structured tabular data is the most commonly used form of data in industry, according to a Kaggle ML and DS Survey. Gradient Boosting Trees, Support Vector Machines, Random Forests, and Logistic Regression are typically used for classification tasks on tabular data. The recent work on the Super Characters method using two-dimensional word embeddings achieved state-of-the-art results in text classification tasks, showcasing the promise of this new approach. In this paper, we propose the SuperTML method, which borrows the idea of the Super Characters method and two-dimensional embeddings to address the problem of classification on tabular data. For each input of tabular data, the features are first projected into two-dimensional embeddings like an image, and then this image is fed into fine-tuned ImageNet CNN models for classification. Experimental results have shown that the proposed SuperTML method achieves state-of-the-art results on both large and small datasets. In data science, data is categorized into structured data and unstructured data. Structured data is also known as tabular data, and the terms will be used interchangeably. Anthony Goldbloom, the founder and CEO of Kaggle, observed that winning techniques have been divided by whether the data was structured or unstructured BID12. Currently, DNN models are widely applied to unstructured data such as images, speech, and text. According to Anthony, "When the data is unstructured, its definitely CNNs and RNNs that are carrying the day" BID12. The successful CNN models in the ImageNet competition BID8 have outperformed humans on the image classification task since ResNet BID6 in 2015. On the other side of the spectrum, machine learning models such as Support Vector Machine (SVM), Gradient Boosting Trees (GBT), Random Forest, and Logistic Regression have been used to process structured data. According to a recent Kaggle survey of 14,000 data scientists, a subdivision of structured data known as relational data is reported as the most popular type of data in industry, with at least 65% of respondents working daily with relational data. Regarding structured data competitions, Anthony says that currently XGBoost is winning practically every competition in the structured data category BID4. XGBoost BID2 is one popular package implementing the Gradient Boosting method. Recent research has tried using one-dimensional embeddings and implementing RNNs or one-dimensional CNNs to address TML (Tabular Machine Learning) tasks, i.e., tasks that deal with structured data processing BID7 BID11, and also categorical embeddings for tabular data with categorical features BID5. However, this reliance upon one-dimensional embeddings may soon come to change. Recent NLP research has shown that the two-dimensional embedding of the Super Characters method BID9 is capable of achieving state-of-the-art results on large dataset benchmarks. The Super Characters method is a two-step method that was initially designed for text classification problems. In the first step, the characters of the input text are drawn onto a blank image. In the second step, the image is fed into two-dimensional CNN models for classification. The two-dimensional CNN models are trained by fine-tuning from models pretrained on a large image dataset, e.g. ImageNet. In this paper, we propose the SuperTML method, which borrows the concept of the Super Characters method to address TML problems. For each input, tabular features are first projected onto a two-dimensional embedding and fed into fine-tuned two-dimensional CNN models for classification.
The proposed SuperTML method automatically handles categorical features and missing values in tabular data, without the need for explicit conversion into numerical values. The SuperTML method is motivated by the analogy between TML problems and text classification tasks. For any sample given in tabular form, if its features are treated like stringified tokens of data, then each sample can be represented as a concatenation of tokenized features. By applying this paradigm to a tabular sample, the existing CNN models used in the Super Characters method can be extended to TML problems. As mentioned in the introduction, the combination of two-dimensional embedding (a core competency of the Super Characters methodology) and pre-trained CNN models has achieved state-of-the-art results on text classification tasks. However, unlike the text classification problems studied in BID9, tabular data has features in separate dimensions. Hence, generated images of tabular data should reserve some gap between features in different dimensions in order to guarantee that features will not overlap in the generated image. SuperTML is composed of two steps, the first of which is two-dimensional embedding. This step projects features in the tabular data onto the generated images, which will be called SuperTML images in this paper. The conversion of tabular training data to SuperTML images is illustrated in Figure 1, where a collection of samples containing four tabular features is converted. The second step is using pretrained CNN models to fine-tune on the generated SuperTML images. Figure 1 only shows the generation of SuperTML images for the training data. It should be noted that for inference, each instance of testing data goes through the same preprocessing to generate a SuperTML image (all of which use the same configuration of two-dimensional embedding) before getting fed into the CNN classification model. The first variant of the method, which concludes by fine-tuning a pretrained CNN model on the generated SuperTML images and returning the trained model, is known as SuperTML_VF and is described in Algorithm 1. To make SuperTML more autonomous and remove the dependency on the feature importance calculation done in Algorithm 1, the SuperTML_EF method is introduced in Algorithm 2: each feature of the sample is drawn in the same font size without overlapping, such that the features of the sample occupy the image area as much as possible. It allocates the same size to every feature, and thus tabular data can be directly embedded into SuperTML images without the need for calculating feature importance. This algorithm shows even better results than Algorithm 1, which will be described in more depth later in the experimental section. The data statistics from the UCI Machine Learning Repository are shown in TAB2. "This is perhaps the best known database to be found in the pattern recognition literature." The Iris dataset is widely used in machine learning courses and tutorials. FIG2 shows an example of a generated SuperTML image created using Iris data. The experimental results of using SENet-154 are shown in Table 2. For the second dataset, we use SuperTML_VF, which gives features different sizes on the SuperTML image according to their importance scores. The feature importance scores are obtained using the XGBoost package BID2. One example of a SuperTML image created using data from this dataset is shown in FIG2. The results in Table 2 show that the SuperTML method obtained slightly better accuracy than XGBoost on this dataset.
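A minimal sketch of the two-dimensional embedding step for SuperTML_EF is given below, assuming a simple near-square grid layout and an arbitrary TrueType font path; the paper's exact layout, font, and image size may differ:

```python
from PIL import Image, ImageDraw, ImageFont

def supertml_ef_image(features, image_size=224, font_path="arial.ttf"):
    """SuperTML_EF sketch: draw each feature as text, equal font size per cell.

    `features` is a list of stringified feature values; the grid layout and
    font path are illustrative choices, not the paper's exact settings.
    """
    img = Image.new("L", (image_size, image_size), color=0)
    draw = ImageDraw.Draw(img)
    n = len(features)
    cols = int(n ** 0.5 + 0.999)            # near-square grid
    cell = image_size // cols
    font = ImageFont.truetype(font_path, size=cell // 3)
    for idx, value in enumerate(features):
        r, c = divmod(idx, cols)
        draw.text((c * cell + 2, r * cell + 2), str(value), fill=255, font=font)
    return img

# e.g. one Iris sample: sepal/petal lengths and widths as strings
img = supertml_ef_image(["5.1", "3.5", "1.4", "0.2"])
```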
The task of this Adult dataset is to predict whether a person's income is larger or smaller than 50,000 dollars per year, based on a collection of surveyed data. For categorical features that are represented by strings, the Squared English Word (SEW) method BID10 is used. Figure 3: SuperTML_VF image example from the Adult dataset. This sample has age = 59, capital gain = 0, capital loss = 0, hours per week = 40, fnlweight = 372020, education number = 13, occupation = "?" (missing value), marital status = "Married-civ-spouse", relationship = "Husband", workclass = "?" (missing value), education = "Bachelors", sex = "Male", race = "White", native country = "United-States". One example of a generated SuperTML image is given in Figure 3. Table 2 shows the results on the Adult dataset. We can see that on this dataset, the SuperTML method still has higher accuracy than the fine-tuned XGBoost model, outperforming it by 0.32% points of accuracy. The Higgs Boson Machine Learning Challenge involved a binary classification task to classify quantum events as signal or background. It was hosted by Kaggle, and though the contest is over, the challenge data is available on opendata BID1. It has 25,000 training samples and 55,000 testing samples. Each example has 30 features, each of which is stored as a real-number value. In this challenge, the AMS score BID0 is used as the performance metric. FIG4 shows two examples of generated SuperTML images (panel (a): a SuperTML_EF event example). TAB3 shows the comparison of different algorithms. The DNN method and XGBoost used in the first two rows use the numerical values of the features as input to the models, which is different from the SuperTML method of using two-dimensional embeddings. It shows that the SuperTML_EF method gives the best AMS score of 3.979. In addition, SuperTML_EF gives better results than SuperTML_VF, which indicates the SuperTML method can work well without the calculation of importance scores. The proposed SuperTML method borrows the idea of two-dimensional embedding from Super Characters and transfers the knowledge learned from computer vision to structured tabular data. Experimental results show that the proposed SuperTML method has achieved state-of-the-art results on both large and small tabular datasets.
Deep learning on structured tabular data using two-dimensional word embedding with fine-tuned ImageNet pre-trained CNN model.
982
scitldr
Predictive coding, within theoretical neuroscience, and variational autoencoders, within machine learning, both involve latent Gaussian models and variational inference. While these areas share a common origin, they have evolved largely independently. We outline connections and contrasts between these areas, using their relationships to identify new parallels between machine learning and neuroscience. We then discuss specific frontiers at this intersection: backpropagation, normalizing flows, and attention, with mutual benefits for both fields. Perception has conventionally been formulated as hierarchical feature detection, similar to discriminative deep networks. In contrast, predictive coding and variational autoencoders (VAEs) frame perception as a generative process, modeling data observations to learn and infer aspects of the external environment. Specifically, both areas model observations, x, using latent variables, z, through a probabilistic model, p_θ(x, z) = p_θ(x|z)p_θ(z). Both areas also use variational inference, introducing an approximate posterior, q(z|x), to infer z and learn the model parameters, θ. These similarities are the result of a common origin, with Mumford, Dayan et al., and others formalizing earlier ideas. However, since their inception, these areas have developed largely independently. We explore their relationships and highlight opportunities for the transfer of ideas. In identifying these ties, we hope to strengthen this promising, close connection between neuroscience and machine learning, prompting further investigation. Predictive Coding. Predictive coding is a theory of thalamocortical function, in which the cortex constructs a probabilistic generative model of sensory inputs, using approximate inference to perform state estimation. Top-down neural projections convey predictions of lower-level activity, while bottom-up projections convert the prediction error at each level into an updated state estimate. Such models are often formulated with hierarchies of Gaussian distributions, with analytical non-linear (e.g. polynomial) functions parameterizing the generative mappings, e.g. p_θ(x, z_1) = N(x; µ_θ,x(z_1), Σ_x) · N(z_1; µ_θ,1, Σ_1). Variational inference is performed using gradient-based optimization on the mean of q(z|x) = N(z_1; µ_q,1, Σ_q,1), yielding gradients which are linear combinations of (prediction) errors, e.g. ∇_{µ_q,1} L = J^⊤ ε_x − ε_1, where L is the objective, J = ∂µ_θ,x/∂µ_q,1 is the Jacobian, and ε_x and ε_1 are weighted errors, i.e. ε_x = Σ_x^{-1}(x − µ_θ,x). Parameter learning can also be performed using gradient-based optimization. We discuss connections between these models and neuroscience in Section 3. Variational Autoencoders. VAEs are a class of Bayesian machine learning models, combining latent Gaussian models with deep neural networks. They consist of an encoder network with parameters φ, parameterizing q_φ(z|x), and a decoder network, parameterizing p_θ(x|z). Thus, rather than performing gradient-based inference, VAEs amortize inference optimization with a learned network, improving computational efficiency. These networks can either take a direct form, e.g. µ_q ← NN_φ(x), or an iterative form, e.g. µ_q ← NN_φ(µ_q, ∇_{µ_q} L), where NN_φ denotes a deep neural network. In both cases, gradients are obtained by reparameterizing stochastic samples, z ∼ q(z|x), separating stochastic and deterministic dependencies to enable differentiation through sampling. The parameters, θ and φ, are learned using gradient-based optimization, with gradients calculated via backpropagation.
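For intuition, the following toy sketch performs predictive-coding-style, gradient-based inference in a single-level linear Gaussian model, where the Jacobian of the generative mean is just the weight matrix W; all sizes and the learning rate are illustrative:

```python
import numpy as np

# A single-level Gaussian model: z ~ N(0, I), x ~ N(W z, I).
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))          # generative weights (illustrative)
mu_z_prior = np.zeros(4)
x = rng.normal(size=8)               # an observation

mu_q = np.zeros(4)                   # posterior mean estimate
lr = 0.02
for _ in range(200):
    eps_x = x - W @ mu_q             # observation prediction error
    eps_z = mu_q - mu_z_prior        # prior prediction error
    # The gradient of the objective is a linear combination of errors;
    # the Jacobian of the generative mean is simply W for this linear model.
    grad = W.T @ eps_x - eps_z
    mu_q = mu_q + lr * grad          # iterative, gradient-based inference
```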
Predictive coding and VAEs are both conventionally formulated as hierarchical latent Gaussian models, with non-linear functions parameterizing the conditional dependencies between variables. In the case of predictive coding, these functions are often polynomials, whereas VAEs use deep networks, which are composed of layers of linear and non-linear operations. In predictive coding, Gaussian covariance matrices, e.g. Σ_x, have been treated as separate parameters, implemented as lateral weights between units at each level. A similar, but more general, mechanism was independently developed for VAEs, known as normalizing flows (Section 4.2). Both areas have been extended to sequential models. In this setting, predictive coding tends to model dynamics explicitly, directly modeling orders of motion or generalized coordinates. VAEs, in contrast, tend to rely on less rigid forms of dynamics, often using recurrent networks, though some works have explored structured dynamics. Both areas use gradient-based learning. In practice, however, learning in predictive coding tends to be minimal, while VAEs use learning extensively, scaling to large image and audio datasets. Predictive coding and VAEs both use variational inference, often setting q(z|x) as Gaussian at each level of latent variables. Predictive coding uses errors (Eq. 2) to perform gradient-based inference; note that this is a direct result of assuming Gaussian priors and conditional likelihoods. In contrast, VAEs use amortized inference, learning to infer. This offers a solution to the so-called "weight transport" problem for predictive coding: inference gradients require the Jacobian of the generative model (Eq. 2), which includes the transpose of the generative weight matrices. Learning a separate set of inference weights avoids this problem; however, separate learning mechanisms are required for these inference weights. The benefit of identifying these connections and contrasts is that they link neuroscience, through predictive coding and VAEs, to machine learning (and vice versa). (Figure 2c: Attention mechanisms can be implemented using the precision (inverse variance) of predictions. Weighting prediction errors biases inference toward representing highly precise dimensions. In the diagram, various strengths of neuromodulation, corresponding to the precision of predictions, adjust the gain of error neurons.) While still under debate, biological correspondences of predictive coding have been proposed.
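To contrast with the gradient-based updates above, here is a minimal sketch of direct amortized inference with the reparameterization trick, using a unit-Gaussian prior and a Gaussian likelihood with fixed variance; the architecture and dimensions are illustrative:

```python
import torch
import torch.nn as nn

class GaussianVAE(nn.Module):
    """Minimal VAE: direct amortized inference, mu_q, log_var <- NN_phi(x)."""
    def __init__(self, x_dim=784, z_dim=16, h=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h), nn.Tanh(),
                                 nn.Linear(h, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, h), nn.Tanh(),
                                 nn.Linear(h, x_dim))

    def forward(self, x):
        mu_q, log_var = self.enc(x).chunk(2, dim=-1)
        # Reparameterization: z = mu + sigma * eps separates the stochastic
        # sample from the parameters, so gradients flow through mu and sigma.
        z = mu_q + (0.5 * log_var).exp() * torch.randn_like(mu_q)
        x_mu = self.dec(z)
        recon = ((x - x_mu) ** 2).sum(-1)        # -log p(x|z) up to a constant
        kl = 0.5 * (mu_q ** 2 + log_var.exp() - log_var - 1).sum(-1)
        return (recon + kl).mean()               # negative ELBO
```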
Backprop requires global information, whereas biology seems to rely largely on local learning rules. A number of biologically-plausible formulations of backprop have been proposed, attempting to reconcile this disparity and others. However, recent formulations of learning in latent variable models offer an alternative perspective: prediction errors at each level of the latent hierarchy provide a local signal, capable of driving learning of inference and generative weights. In Section 3, we noted that deep networks appear between each latent level, suggesting a correspondence with dendrites rather than the traditional analogy as networks of neurons. This implies the following set-up: learning across the cortical hierarchy is handled via local errors at each level, whereas learning within the neurons at each level is mediated through a mechanism similar to backpropagation. Indeed, looking at the literature, we see ample evidence of non-linear dendritic computation and backpropagating action potentials within neurons (Fig. 2a). From this perspective, segmented dendrites for top-down and bottom-up inputs to pyramidal neurons could implement separate inference computations for errors at different levels (Eq. 2). While the mechanisms underlying these processes remain unclear, focusing efforts on formulating biologically plausible backpropagation from this perspective (and not supervised learning) could prove fruitful. We often consider factorized parametric distributions, as they enable efficient evaluation and sampling. However, simple distributions are limiting. Normalizing flows (NFs) provide added complexity while maintaining tractable evaluation and sampling. They consist of a tractable base distribution and one or more invertible transforms. With the base distribution as p_θ(u) and the transforms as v = f_θ(u), the probability p_θ(v) is given by the change of variables formula: p_θ(v) = p_θ(u) |det(∂v/∂u)|^{-1}, where det(·) denotes the matrix determinant and |·| denotes absolute value. The determinant term corrects for the local scaling of space when moving from u to v. A popular family of transforms is that of autoregressive affine transforms. One example is given by v_i = α_θ(v_{<i}) + β_θ(v_{<i}) · u_i, where v_i is the i-th dimension of v and α_θ and β_θ are functions of the preceding dimensions. The inverse transform (Fig. 2b) is a normalization (whitening) operation. Thus, we can sample from complex distributions by starting with simple distributions and applying local affine transforms. Conversely, we can evaluate inputs from complex distributions by applying normalization transforms, then evaluating in a simpler space. Local inhibition is ubiquitous in neural systems, thought to implement normalization. These circuits, modeled with subtractive and divisive operations (Eq. 5), give rise to decorrelation in retina, LGN, and cortex. NFs offer a novel description of these circuits and agree with predictive coding. For instance, evaluating flow-based conditional likelihoods involves whitening the observations, as performed by Rao & Ballard, to remove low-level spatial correlations. The same principle can be applied across time, where NFs resemble temporal derivatives, which are the basis of Friston's generalized coordinates. Likewise, Friston's proposal of implementing prior covariance matrices with lateral weights in cortex corresponds to a linear NF. Local inhibition is also present in central pattern generator (CPG) circuits, giving rise to correlations in muscle activation. NFs are also being explored in the context of action selection in reinforcement learning.
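The affine autoregressive transform and its whitening inverse can be sketched as follows; the toy conditioners stand in for the learned functions α_θ and β_θ, and the sequential forward loop illustrates why sampling and normalization have different parallelism properties:

```python
import numpy as np

def ar_affine_forward(u, alpha, beta):
    """Sample: v_i = alpha(v_<i) + beta(v_<i) * u_i (inherently sequential)."""
    v = np.zeros_like(u)
    for i in range(len(u)):
        v[i] = alpha(v[:i]) + beta(v[:i]) * u[i]
    return v

def ar_affine_inverse(v, alpha, beta):
    """Normalize/whiten: u_i = (v_i - alpha(v_<i)) / beta(v_<i) (parallelizable)."""
    u = np.array([(v[i] - alpha(v[:i])) / beta(v[:i]) for i in range(len(v))])
    # log|det dv/du| = sum_i log|beta(v_<i)|, the change-of-variables correction
    log_det = np.sum([np.log(np.abs(beta(v[:i]))) for i in range(len(v))])
    return u, log_det

# Toy conditioners standing in for learned networks alpha_theta, beta_theta:
alpha = lambda prefix: 0.5 * prefix.sum()
beta = lambda prefix: 1.0 + 0.1 * len(prefix)

u = np.random.randn(5)
v = ar_affine_forward(u, alpha, beta)
u_rec, log_det = ar_affine_inverse(v, alpha, beta)   # u_rec ≈ u
```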
By providing a basis of correlated motor outputs, NFs improve action selection and learning, which can take place in a less correlated space that is easier to model. CPGs would likely correspond to a form of inverse autoregressive flow, in order to maintain efficient sampling. Predictive coding has proposed that prior covariance matrices, which weight prediction errors, could implement a form of attention (Fig. 2c). Intuitively, decreasing the variance of a predicted variable pushes the model to more accurately infer and predict that variable. Biologically, this is hypothesized to be implemented via gain modulation of error-encoding neurons, mediated through neurotransmitters and synchronizing gamma oscillations. This attentional control mechanism could bias a model toward representing task-relevant information. Deep latent variable models have largely ignored this functionality; when combined with active components, variances are typically held constant. Enabling this capacity for task-dependent perceptual modulation may prove useful or even essential in applying deep latent variable models to complex tasks. We have identified commonalities between predictive coding and VAEs, discussing new frontiers resulting from this perspective. Reuniting these areas may strengthen the connection between neuroscience and machine learning. Further refining this connection could lead to mutual benefits: neuroscience can offer inspiration for investigation in machine learning, and machine learning can evaluate ideas on real-world datasets and environments. Indeed, despite some push-back, if predictive coding and related theories are to become validated descriptions of the brain and overcome their apparent generality, they will likely require the computational tools and ideas of modern machine learning to pin down and empirically compare design choices.
connections between predictive coding and VAEs + new frontiers
983
scitldr
Adaptive optimization methods such as AdaGrad, RMSprop and Adam have been proposed to achieve a rapid training process with an element-wise scaling term on learning rates. Though prevailing, they are observed to generalize poorly compared with SGD, or even fail to converge due to unstable and extreme learning rates. Recent work has put forward some algorithms such as AMSGrad to tackle this issue, but they failed to achieve considerable improvement over existing methods. In our paper, we demonstrate that extreme learning rates can lead to poor performance. We provide new variants of Adam and AMSGrad, called AdaBound and AMSBound respectively, which employ dynamic bounds on learning rates to achieve a gradual and smooth transition from adaptive methods to SGD, and we give a theoretical proof of convergence. We further conduct experiments on various popular tasks and models, which is often insufficient in previous work. Experimental results show that the new variants can eliminate the generalization gap between adaptive methods and SGD while maintaining higher learning speed early in training. Moreover, they can bring significant improvement over their prototypes, especially on complex deep networks. The implementation of the algorithm can be found at https://github.com/Luolc/AdaBound. There has been tremendous progress in first-order optimization algorithms for training deep neural networks. One of the most dominant algorithms is stochastic gradient descent (SGD) BID15, which performs well across many applications in spite of its simplicity. However, a disadvantage of SGD is that it scales the gradient uniformly in all directions. This may lead to poor performance as well as limited training speed when the training data are sparse. To address this problem, recent work has proposed a variety of adaptive methods that scale the gradient by square roots of some form of the average of the squared values of past gradients. Examples of such methods include ADAM BID7, ADAGRAD BID2 and RMSPROP BID16. ADAM in particular has become the default algorithm leveraged across many deep learning frameworks due to its rapid training speed BID17. Despite their popularity, the generalization ability and out-of-sample behavior of these adaptive methods are likely worse than their non-adaptive counterparts. Adaptive methods often display faster progress in the initial portion of training, but their performance quickly plateaus on unseen data (the development/test set) BID17. Indeed, the optimizer is chosen as SGD (or SGD with momentum) in several recent state-of-the-art works in natural language processing and computer vision BID11 BID18, where in these instances SGD does perform better than adaptive methods. BID14 have recently proposed a variant of ADAM called AMSGRAD, hoping to solve this problem. The authors provide a theoretical guarantee of convergence but only illustrate its better performance on training data. However, the generalization ability of AMSGRAD on unseen data is found to be similar to that of ADAM, while a considerable performance gap still exists between AMSGRAD and SGD BID6 BID1. In this paper, we first conduct an empirical study on ADAM and illustrate that both extremely large and small learning rates exist by the end of training. The results correspond with the perspective pointed out by BID17 that the lack of generalization performance of adaptive methods may stem from unstable and extreme learning rates.
In fact, introducing non-increasing learning rates, the key point in AMSGRAD, may help abate the impact of huge learning rates, but it neglects possible effects of small ones. We further provide an example of a simple convex optimization problem to elucidate how tiny learning rates of adaptive methods can lead to undesirable non-convergence. In such settings, RMSPROP and ADAM provably do not converge to an optimal solution, and furthermore, however large the initial step size α is, it is impossible for ADAM to fight against the scale-down term. Based on the above analysis, we propose new variants of ADAM and AMSGRAD, named ADABOUND and AMSBOUND, which do not suffer from the negative impact of extreme learning rates. We employ dynamic bounds on learning rates in these adaptive methods, where the lower and upper bound are initialized as zero and infinity respectively, and both smoothly converge to a constant final step size. The new variants can be regarded as adaptive methods at the beginning of training, and they gradually and smoothly transform to SGD (or with momentum) as the time step increases. In this framework, we can enjoy a rapid initial training process as well as good final generalization ability. We provide a convergence analysis for the new variants in the convex setting. We finally turn to an empirical study of the proposed methods on various popular tasks and models in computer vision and natural language processing. Experimental results demonstrate that our methods have higher learning speed early in training and in the meantime guarantee strong generalization performance compared to several adaptive and non-adaptive methods. Moreover, they can bring considerable improvement over their prototypes, especially on complex deep networks. Notations Given a vector θ ∈ R^d we denote its i-th coordinate by θ_i; we use θ^k to denote element-wise power of k and ∥θ∥ to denote its ℓ2-norm; for a vector θ_t in the t-th iteration, the i-th coordinate of θ_t is denoted θ_{t,i} by adding a subscript i. Given two vectors v, w ∈ R^d, we use ⟨v, w⟩ to denote their inner product, v ⊙ w to denote element-wise product, v/w to denote element-wise division, max(v, w) to denote element-wise maximum and min(v, w) to denote element-wise minimum. We use S^d_+ to denote the set of all positive definite d × d matrices. For a vector a ∈ R^d and a positive definite matrix M ∈ R^{d×d}, we use a/M to denote M^{−1}a and √M to denote M^{1/2}. Online convex programming A flexible framework to analyze iterative optimization methods is the online optimization problem. It can be formulated as a repeated game between a player (the algorithm) and an adversary. At step t, the algorithm chooses a decision x_t ∈ F, where F ⊂ R^d is a convex feasible set. Then the adversary chooses a convex loss function f_t and the algorithm incurs loss f_t(x_t). The difference between the total loss and its minimum value for a fixed decision is known as the regret, which is represented by R_T = Σ_{t=1}^T f_t(x_t) − min_{x∈F} Σ_{t=1}^T f_t(x). Throughout this paper, we assume that the feasible set F has bounded diameter and that ∥∇f_t(x)∥_∞ is bounded for all t ∈ [T] and x ∈ F. We are interested in algorithms with little regret. Formally speaking, our aim is to devise an algorithm that ensures R_T = o(T), which implies that on average, the model's performance converges to the optimal one. It has been pointed out that an online optimization algorithm with vanishing average regret yields a corresponding stochastic optimization algorithm BID0.
Thus, following BID14, we use online gradient descent and stochastic gradient descent synonymously. A generic overview of optimization methods We follow BID14 to provide a generic framework of optimization methods in Algorithm 1 that encapsulates many popular adaptive and non-adaptive methods. This is useful for understanding the properties of different optimization methods. Note that the algorithm is still abstract, since the functions φ_t: F^t → R^d and ψ_t: F^t → S^d_+ have not been specified. In this paper, we refer to α as the initial step size and α_t/√V_t as the learning rate of the algorithm.
Algorithm 1 Generic framework of optimization methods
Input: x_1 ∈ F, initial step size α, sequence of functions {φ_t, ψ_t}
1: for t = 1 to T do
2:   g_t = ∇f_t(x_t)
3:   m_t = φ_t(g_1, . . ., g_t) and V_t = ψ_t(g_1, . . ., g_t)
4:   x̂_{t+1} = x_t − α_t m_t/√V_t
5:   x_{t+1} = Π_{F,√V_t}(x̂_{t+1})
6: end for
Note that we employ a design of decreasing step size α_t = α/√t, as it is required for the theoretical proof of convergence. However, such an aggressive decay of step size typically translates into poor empirical performance, while a simple constant step size α_t = α usually works well in practice. For the sake of clarity, we will use the decreasing scheme for theoretical analysis and the constant scheme for empirical study in the rest of the paper. Under such a framework, we can summarize the popular optimization methods in Table 1. A few remarks are in order. We can see that the scaling term ψ_t is I in SGD(M), while adaptive methods introduce different kinds of averaging of the squared values of past gradients. ADAM and RMSPROP can be seen as variants of ADAGRAD, where the former ones use an exponential moving average as the function ψ_t instead of the simple average used in ADAGRAD. In particular, RMSPROP is essentially a special case of ADAM with β_1 = 0. AMSGRAD is not listed in the table as it does not have a simple expression of ψ_t. It can be defined as ψ_t = diag(v̂_t), where v̂_t is obtained by the following recursion: v_t = β_2 v_{t−1} + (1 − β_2) g_t^2 and v̂_t = max(v̂_{t−1}, v_t), with v̂_0 = v_0 = 0. The definition of φ_t is the same as that of ADAM. In the rest of the paper we will mainly focus on ADAM due to its generality, but our arguments also apply to other similar adaptive methods such as RMSPROP and AMSGRAD.
Table 1: An overview of popular optimization methods using the generic framework.
Method | φ_t | ψ_t
SGD | g_t | I
ADAGRAD | g_t | diag(Σ_{i=1}^t g_i^2)/t
RMSPROP | g_t | (1 − β_2) diag(Σ_{i=1}^t β_2^{t−i} g_i^2)
ADAM | (1 − β_1) Σ_{i=1}^t β_1^{t−i} g_i | (1 − β_2) diag(Σ_{i=1}^t β_2^{t−i} g_i^2)
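The generic framework can be made concrete with a short sketch (ours; helper names are illustrative): an optimizer is fully specified by the averaging function phi_t and the scaling function psi_t, as shown below for SGD and ADAGRAD (projection omitted).

```python
# Sketch of Algorithm 1: x_{t+1} = x_t - alpha_t * phi(g_1..g_t) / sqrt(psi(g_1..g_t)).
import numpy as np

def generic_step(x, grads, t, alpha, phi, psi):
    m_t = phi(grads)                 # gradient averaging, e.g. an EMA for ADAM
    V_t = psi(grads)                 # squared-gradient scaling, I for SGD
    alpha_t = alpha / np.sqrt(t)     # decreasing step size used in the analysis
    return x - alpha_t * m_t / np.sqrt(V_t)

# SGD: phi_t = g_t, psi_t = I
sgd_phi = lambda g: g[-1]
sgd_psi = lambda g: np.ones_like(g[-1])
# ADAGRAD: phi_t = g_t, psi_t = diag(sum_i g_i^2) / t
adagrad_phi = lambda g: g[-1]
adagrad_psi = lambda g: np.mean(np.asarray(g) ** 2, axis=0)
```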
3 THE NON-CONVERGENCE CAUSED BY EXTREME LEARNING RATES
In this section, we elaborate on the primary defect in current adaptive methods with a preliminary experiment and a rigorous proof. As mentioned above, adaptive methods like ADAM are observed to perform worse than SGD. BID14 proposed AMSGRAD to solve this problem, but recent work has pointed out that AMSGRAD does not show evident improvement over ADAM BID6 BID1. Since AMSGRAD is claimed to have a smaller learning rate compared with ADAM, the authors only consider large learning rates as the cause of ADAM's bad performance. However, small ones might be a pitfall as well. Thus, we speculate that both extremely large and small learning rates of ADAM are likely to account for its ordinary generalization ability. To corroborate our speculation, we sample learning rates of several weights and biases of ResNet-34 on CIFAR-10 using ADAM. Specifically, we randomly select nine 3 × 3 convolutional kernels from different layers and the biases in the last linear layer. As parameters of the same layer usually have similar properties, we only demonstrate the learning rates of nine weights, sampled from the nine kernels respectively, and one bias from the last layer by the end of training, and employ a heatmap to visualize them. As shown in Figure 1, we find that when the model is close to convergence, the learning rates are composed of tiny ones less than 0.01 as well as huge ones greater than 1000.
[Figure 1: Learning rates of sampled parameters. Each cell contains a value obtained by conducting a logarithmic operation on the learning rate; the lighter the cell, the smaller the learning rate.]
The above analysis and observation show that there are indeed learning rates which are too large or too small in the final stage of the training process. AMSGRAD may help abate the impact of huge learning rates, but it neglects the other side of the coin. So far, we still have the following two questions. First, does a tiny learning rate really do harm to the convergence of ADAM? Second, as the learning rate highly depends on the initial step size, can we use a relatively larger initial step size α to get rid of too small learning rates? To answer these questions, we show that undesirable convergence behavior for ADAM and RMSPROP can be caused by extremely small learning rates, and furthermore, in some cases no matter how large the initial step size α is, ADAM will still fail to find the right path and converge to some highly suboptimal points. Consider the following sequence of linear functions for F = [−1, 1]: f_t(x) = −x for t mod C = 1; f_t(x) = 2x for t mod C = 2; f_t(x) = 0 otherwise, where C ∈ N is a large constant that depends on β_2 (the precise condition on C is given in the appendix). For this function sequence, it is easy to see that the point x = −1 provides the minimum regret. Supposing β_1 = 0, we show that ADAM converges to a highly suboptimal solution of x ≥ 0 for this setting. Intuitively, the reasoning is as follows. The algorithm obtains a gradient −1 once every C steps, which moves the algorithm in the wrong direction. Then, at the next step, it observes a gradient 2. But the larger gradient 2 is unable to counteract the effect of the wrong direction, since the learning rate at this step is scaled down to a value much less than the previous one; hence x becomes larger and larger as the time step increases. We formalize this intuition in the result below. Theorem 1. There is an online convex optimization problem where for any initial step size α, ADAM has non-zero average regret, i.e., R_T/T ↛ 0 as T → ∞. We relegate all proofs to the appendix. Note that the above example also holds for constant step size α_t = α. Also note that vanilla SGD does not suffer from this problem: there is a wide range of valid choices of initial step size α for which the average regret of SGD asymptotically goes to 0, in other words, SGD converges to the optimal solution. This problem can be more obvious in the later stage of a training process in practice, when the algorithm gets stuck at some suboptimal points. In such cases, gradients at most steps are close to 0 and the average of the second-order momentum may be highly variable due to the property of the exponential moving average. Therefore, "correct" signals which appear with a relatively low frequency (i.e., gradient 2 every C steps in the above example) may not be able to lead the algorithm to the right path if they come after some "wrong" signals (i.e., gradient −1 in the example), even though the correct ones have larger absolute values of gradients. One may wonder if using a large β_1 helps, as we usually use β_1 close to 1 in practice. However, the following result shows that for any constant β_1 and β_2 with β_1 < √β_2, there exists an example where ADAM has non-zero average regret asymptotically, regardless of the initial step size α.
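The drift described above can be reproduced numerically; the simulation below is our sketch (parameter values are ours, chosen so that C is large relative to the decay of the second-moment average), running ADAM with beta1 = 0 on the counterexample.

```python
# Sketch: ADAM (beta1 = 0) on f_t with gradient -1 (t mod C == 1),
# 2 (t mod C == 2), 0 otherwise, projected onto F = [-1, 1].
# The rare gradient 2 is scaled down too much to undo the -1 steps,
# so x stays positive and keeps drifting upward toward +1,
# even though x = -1 is optimal.
import numpy as np

def run_adam_counterexample(C=100, T=100000, alpha=0.1, beta2=0.9, eps=1e-8):
    x, v = 0.0, 0.0
    for t in range(1, T + 1):
        g = -1.0 if t % C == 1 else (2.0 if t % C == 2 else 0.0)
        v = beta2 * v + (1 - beta2) * g ** 2
        x -= (alpha / np.sqrt(t)) * g / (np.sqrt(v) + eps)
        x = min(max(x, -1.0), 1.0)  # projection onto [-1, 1]
    return x

print(run_adam_counterexample())  # well above 0; approaches +1 as T grows
```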
Theorem 2. For any constant β_1, β_2 ∈ [0, 1) such that β_1 < √β_2, there is an online convex optimization problem where for any initial step size α, ADAM has non-zero average regret, i.e., R_T/T ↛ 0 as T → ∞. Furthermore, a stronger result holds in the easier stochastic optimization setting. Theorem 3. For any constant β_1, β_2 ∈ [0, 1) such that β_1 < √β_2, there is a stochastic convex optimization problem where for any initial step size α, ADAM does not converge to the optimal solution. Remark. The analysis of ADAM in BID7 relies on decreasing β_1 over time, while here we use constant β_1. Indeed, since the critical parameter is β_2 rather than β_1 in our analysis, it is quite easy to extend our examples to the case using a decreasing scheme of β_1. As mentioned by BID14, the condition β_1 < √β_2 is benign and is typically satisfied in the parameter settings used in practice. This condition is also assumed in the convergence proof of BID7. The above results illustrate the potential bad impact of extreme learning rates, and algorithms are unlikely to achieve good generalization ability without solving this problem. In this section we develop new variants of optimization methods and provide their convergence analysis. Our aim is to devise a strategy that combines the benefits of adaptive methods, viz. fast initial progress, and the good final generalization properties of SGD. Intuitively, we would like to construct an algorithm that behaves like adaptive methods early in training and like SGD at the end. Inspired by gradient clipping, a popular technique used in practice that clips gradients larger than a threshold to avoid gradient explosion, we employ clipping on learning rates in ADAM to propose ADABOUND in Algorithm 2. Consider applying the following operation in ADAM: η̂ = Clip(α/√V_t, η_l, η_u), which clips the learning rate element-wise such that the output is constrained to lie in [η_l, η_u]. It follows that SGD(M) with α = α* can be considered as the case where η_l = η_u = α*. As for ADAM, η_l = 0 and η_u = ∞. Now we can provide the new strategy with the following steps. We employ η_l and η_u as functions of t instead of constant lower and upper bounds, where η_l(t) is a non-decreasing function that starts from 0 at t = 0 and converges to α* asymptotically, and η_u(t) is a non-increasing function that starts from ∞ at t = 0 and also converges to α* asymptotically. In this setting, ADABOUND behaves just like ADAM at the beginning, as the bounds have very little impact on learning rates, and it gradually transforms to SGD(M) as the bounds become more and more restrictive. We prove the following key result for ADABOUND. Theorem 4. Let {x_t} and {v_t} be the sequences obtained from Algorithm 2, with β_1 = β_{11}, β_{1t} ≤ β_1 for all t ∈ [T] and β_1/√β_2 < 1. For x_t generated using the ADABOUND algorithm, the regret R_T admits an explicit upper bound; the formal statement with all conditions is given as Theorem 5 in the appendix. The following falls as an immediate corollary of the above result. Corollary 4.1. Suppose β_{1t} = β_1 λ^{t−1} in Theorem 4; then the regret bound applies with this decay. It is easy to see that the regret of ADABOUND is upper bounded by O(√T). Following prior analyses, one can use a much more modest momentum decay of β_{1t} = β_1/t and still ensure a regret of O(√T). It should be mentioned that one can also incorporate the dynamic bound in AMSGRAD. The resulting algorithm, namely AMSBOUND, also enjoys a regret of O(√T), and the proof of convergence is almost the same as that of Theorem 4 (see Appendix F for details). In the next section we will see that AMSBOUND has similar performance to ADABOUND on several well-known tasks.
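The clipping strategy can be summarized in a short sketch (ours, using the constant-step-size scheme described earlier and omitting bias correction and projection; the default bound functions from the next section are assumed):

```python
# Sketch of one ADABOUND step: an ADAM-style step whose element-wise
# learning rate is clipped to [eta_l(t), eta_u(t)]; both bounds converge
# to the final step size alpha_star, giving ADAM early and SGD(M) late.
import numpy as np

def adabound_step(x, grad, m, v, t, alpha=1e-3, alpha_star=0.1,
                  beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    eta_l = alpha_star - alpha_star / ((1 - beta2) * t + 1)  # 0    -> alpha_star
    eta_u = alpha_star + alpha_star / ((1 - beta2) * t)      # inf  -> alpha_star
    lr = np.clip(alpha / (np.sqrt(v) + eps), eta_l, eta_u)   # the Clip operation
    return x - lr * m, m, v
```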
We end this section with a comparison to previous work. For the idea of transforming ADAM to SGD, there is a similar work by BID6. The authors propose a method that uses ADAM at first and switches the algorithm to SGD at some specific step. Compared with their approach, our methods have two advantages. First, whether there exists a fixed turning point to distinguish ADAM from SGD is uncertain; we address this problem with a continuous transformation procedure rather than a "hard" switch. Second, they introduce an extra hyperparameter to decide the switching time, which is not very easy to fine-tune. As for our methods, the flexible parts introduced are the two bound functions. We conduct an empirical study of the impact of different kinds of bound functions. The results are placed in Appendix G, as we find that the convergence target α* and the convergence speed are not very important to the final results. For the sake of clarity, we will use η_l(t) = 0.1 − 0.1/((1 − β_2)t + 1) and η_u(t) = 0.1 + 0.1/((1 − β_2)t) in the rest of the paper unless otherwise specified. In this section, we turn to an empirical study of different models to compare the new variants with popular optimization methods including SGD(M), ADAGRAD, ADAM, and AMSGRAD. We focus on three tasks: the MNIST image classification task BID9, the CIFAR-10 image classification task BID8, and the language modeling task on Penn Treebank BID12. We choose them due to their broad importance and the availability of their architectures for reproducibility. The setup for each task is detailed in TAB0. We run each experiment three times with the specified initialization method from random starting points. A fixed budget on the number of epochs is assigned for training, and the decay strategy is introduced in the following parts. We choose the settings that achieve the lowest training loss at the end. Optimization hyperparameters can exert great impact on the ultimate solutions found by optimization algorithms, so here we describe how we tune them. To tune the step size, we follow the method in BID17: we implement a logarithmically-spaced grid of five step sizes, and if the best performing parameter is at one of the extremes of the grid, we try new grid points so that the best performing parameters are at one of the middle points in the grid. Specifically, we tune over hyperparameters in the following way. For tuning the step size of SGD(M), we first coarsely tune the step size on a logarithmic scale from {100, 10, 1, 0.1, 0.01} and then fine-tune it. Whether momentum is used depends on the specific model, but we set the momentum parameter to the default value 0.9 for all our experiments. We find this strategy effective given the vastly different scales of learning rates needed for different modalities. For instance, SGD with α = 10 performs best for language modeling on PTB, but for the ResNet-34 architecture on CIFAR-10, a learning rate of 0.1 for SGD is necessary. ADAGRAD The initial set of step sizes used for ADAGRAD is: {5e-2, 1e-2, 5e-3, 1e-3, 5e-4}. For the initial accumulator value, we choose the recommended value 0. ADAM & AMSGRAD We employ the same hyperparameters for these two methods. The initial step sizes are chosen from: {1e-2, 5e-3, 1e-3, 5e-4, 1e-4}. We tune over β_1 values of {0.9, 0.99} and β_2 values of {0.99, 0.999}. We use a perturbation value ε = 1e-8. ADABOUND & AMSBOUND We directly apply the default hyperparameters for ADAM (a learning rate of 0.001, β_1 = 0.9 and β_2 = 0.999) in our proposed methods.
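As a quick sanity check of the default bound functions specified above (our snippet), the interval [eta_l(t), eta_u(t)] is nearly unconstrained at the start and tightens around the final step size 0.1:

```python
# Evaluate eta_l(t) = 0.1 - 0.1/((1-beta2)t + 1) and
# eta_u(t) = 0.1 + 0.1/((1-beta2)t): wide early (ADAM-like), tight late (SGD-like).
beta2 = 0.999
for t in [1, 10, 100, 1000, 10000, 100000]:
    eta_l = 0.1 - 0.1 / ((1 - beta2) * t + 1)
    eta_u = 0.1 + 0.1 / ((1 - beta2) * t)
    print(f"t={t:>6}  eta_l={eta_l:.4f}  eta_u={eta_u:.4f}")
# t=1: [0.0001, 100.1];  t=100000: [0.0990, 0.1010]
```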
Note that for other hyperparameters such as batch size, dropout probability, weight decay and so on, we choose them to match the recommendations of the respective base architectures. We train a simple fully connected neural network with one hidden layer for the multiclass classification problem on the MNIST dataset. We run 100 epochs and omit the decay scheme for this experiment. FIG0 shows the learning curve for each optimization method on both the training and test sets. We find that for training, all algorithms can achieve accuracy approaching 100%. For the test set, SGD performs slightly better than the adaptive methods ADAM and AMSGRAD. Our two proposed methods, ADABOUND and AMSBOUND, display slight improvement; compared with their prototypes there are visible increases in test accuracy. Using DenseNet-121 BID5 and ResNet-34 BID3, we then consider the task of image classification on the standard CIFAR-10 dataset. In this experiment, we employ a fixed budget of 200 epochs and reduce the learning rates by 10 after 150 epochs. DenseNet We first run a DenseNet-121 model on CIFAR-10, and our results are shown in FIG1. We can see that adaptive methods such as ADAGRAD, ADAM and AMSGRAD appear to perform better than the non-adaptive ones early in training. But by epoch 150, when the learning rates are decayed, SGDM begins to outperform those adaptive methods. As for our methods, ADABOUND and AMSBOUND, they converge as fast as the adaptive ones and achieve slightly higher accuracy than SGDM on the test set at the end of training. In addition, compared with their prototypes, their performance is evidently enhanced, with approximately 2% improvement in test accuracy. ResNet Results for this experiment are reported in FIG1. As expected, the overall performance of each algorithm on ResNet-34 is similar to that on DenseNet-121. ADABOUND and AMSBOUND even surpass SGDM by 1%. Despite the relatively bad generalization ability of adaptive methods, our proposed methods overcome this drawback by placing bounds on their learning rates, and they obtain almost the best accuracy on the test set for both DenseNet and ResNet on CIFAR-10. Finally, we conduct an experiment on the language modeling task with a Long Short-Term Memory (LSTM) network BID4. From the two experiments above, we observe that our methods show much more improvement in deep convolutional neural networks than in perceptrons. Therefore, we suppose that the enhancement is related to the complexity of the architecture, and we run three models with (L1) 1-layer, (L2) 2-layer and (L3) 3-layer LSTMs respectively. We train them on Penn Treebank, running for a fixed budget of 200 epochs. We use perplexity as the metric to evaluate performance and report the results. We find that in all models, ADAM has the fastest initial progress but stagnates at worse performance than SGD and our methods. Different from the phenomena in the previous experiments on the image classification tasks, ADABOUND and AMSBOUND do not display rapid speed at the early training stage, but their curves are smoother than that of SGD. Comparing L1, L2 and L3, we can easily notice a distinct difference in the degree of improvement. In L1, the simplest model, our methods perform slightly (about 1.1%) better than ADAM, while in L3, the most complex model, they show evident improvement (over 2.8%) in terms of perplexity. This serves as evidence for the relationship between the model's complexity and the degree of improvement.
To investigate the efficacy of our proposed algorithms, we select popular tasks from computer vision and natural language processing. Based on the results shown above, it is easy to find that ADAM and AMSGRAD usually perform similarly, and the latter does not show much improvement in most cases. Their variants, ADABOUND and AMSBOUND, on the other hand, demonstrate fast convergence compared with SGD, while they also greatly exceed the two original methods with respect to test accuracy at the end of training. This phenomenon exactly confirms our view mentioned in Section 3 that both large and small learning rates can influence the convergence. Besides, we implement our experiments on models with different complexities, consisting of a perceptron, two deep convolutional neural networks and a recurrent neural network. The perceptron used on MNIST is the simplest, and our methods perform slightly better than the others. As for DenseNet and ResNet, obvious increases in test accuracy can be observed. We attribute this difference to the complexity of the models. Specifically, for deep CNN models, convolutional and fully connected layers play different parts in the task. Also, different convolutional layers are likely to be responsible for different roles BID10, which may lead to a distinct variation of the gradients of parameters. In other words, extreme learning rates (huge or tiny) may appear more frequently in complex models such as ResNet. As our algorithms are proposed to avoid them, the greater enhancement of performance in complex architectures can be explained intuitively. The higher degree of improvement on LSTMs with more layers on the language modeling task is also consistent with the above analysis. Despite the superior results of our methods, several problems remain to be explored. For example, the improvement on simple models is not very inspiring; we can investigate how to achieve higher improvement on such models. Besides, we only discuss reasons for the weak generalization ability of adaptive methods; why SGD usually performs well across diverse applications of machine learning still remains uncertain. Last but not least, applying dynamic bounds on learning rates is only one particular way to conduct a gradual transformation from adaptive methods to SGD. There might be other ways, such as well-designed decay, that can also work, which remain to be explored. We investigate existing adaptive algorithms and find that extremely large or small learning rates can result in poor convergence behavior. A rigorous proof of non-convergence for ADAM is provided to demonstrate the above problem. Motivated by the strong generalization ability of SGD, we design a strategy to constrain the learning rates of ADAM and AMSGRAD to avoid violent oscillation. Our proposed algorithms, ADABOUND and AMSBOUND, which employ dynamic bounds on their learning rates, achieve a smooth transition to SGD. They show great efficacy on several standard benchmarks while maintaining advantageous properties of adaptive methods such as rapid initial progress and hyperparameter insensitivity. We thank all reviewers for providing constructive suggestions. We also thank Junyang Lin and Ruixuan Luo for proofreading and doing auxiliary experiments. Xu Sun is the corresponding author of this paper. Lemma 1 (McMahan & Streeter, 2010). For any Q ∈ S^d_+ and convex feasible set F ⊂ R^d, suppose u_1 = argmin_{x∈F} ∥Q^{1/2}(x − z_1)∥ and u_2 = argmin_{x∈F} ∥Q^{1/2}(x − z_2)∥; then ∥Q^{1/2}(u_1 − u_2)∥ ≤ ∥Q^{1/2}(z_1 − z_2)∥. Proof. We provide the proof here for completeness.
Since u_1 = argmin_{x∈F} ∥Q^{1/2}(x − z_1)∥ and u_2 = argmin_{x∈F} ∥Q^{1/2}(x − z_2)∥, from the property of the projection operator we have the following: DISPLAYFORM1 Combining the above inequalities, we have DISPLAYFORM2 Also, observe the following: DISPLAYFORM3 The above inequality can be obtained from the fact that DISPLAYFORM4 and rearranging the terms. Combining the above inequality with the earlier one, we have the required result. Lemma 2. Suppose m_t = β_1 m_{t−1} + (1 − β_1)g_t with m_0 = 0 and 0 ≤ β_1 < 1. We have DISPLAYFORM5 Proof. If β_1 = 0, the equality directly holds due to m_t = g_t. Otherwise, 0 < β_1 < 1. For any θ > 0 we have DISPLAYFORM6 The inequality follows from the Cauchy-Schwarz and Young's inequalities. In particular, let θ = 1/β_1 − 1. Then we have DISPLAYFORM7 Dividing both sides by β_1^t, we get DISPLAYFORM8 Then multiplying both sides by β_1^t we obtain DISPLAYFORM9 Taking the summation of the above inequality over t = 1, 2, · · ·, T, we have DISPLAYFORM10 The second inequality is due to the following fact about geometric series: DISPLAYFORM11 We complete the proof. Proof. First, we rewrite the update of ADAM in Algorithm 1 in the following recursive form: DISPLAYFORM0 where m_{0,i} = 0 and v_{0,i} = 0 for all i ∈ [d], and ψ_t = diag(v_t). We consider the setting where the f_t are linear functions and F = [−1, 1]. In particular, we define the following function sequence: f_t(x) = −x for t mod C = 1; 2x for t mod C = 2; 0 otherwise, where C ∈ N satisfies the following: DISPLAYFORM2 It is not hard to see that the condition holds for a large constant C that depends on β_2. Since the problem is one-dimensional, we drop the indices representing coordinates from all quantities in Algorithm 1. For this function sequence, it is easy to see that the point x = −1 provides the minimum regret. Consider the execution of the ADAM algorithm for this sequence of functions with β_1 = 0. Note that since the gradients of these functions are bounded, F has bounded D_∞ diameter, and β_1^2/β_2 < 1 as β_1 = 0, the conditions on the parameters required for ADAM are satisfied BID7. The gradients have the following form: ∇f_i(x) = −1 for i mod C = 1; 2 for i mod C = 2; 0 otherwise. Let τ ∈ N, τ > 1 be such that DISPLAYFORM4 DISPLAYFORM5 for all t ≥ τ. We start with the following preliminary result. Lemma 3. For the parameter settings and conditions assumed in Theorem 1, there is a t' ≥ τ such that x_{Ct'+1} ≥ 0. Proof by contradiction. Assume that x_{Ct+1} < 0 for all t ≥ τ. Firstly, for t ≥ τ, we observe the following inequalities: DISPLAYFORM6 DISPLAYFORM7 DISPLAYFORM8 From the (Cτ + 1)-th update of ADAM in the recursion above, we obtain: DISPLAYFORM9 The first inequality follows from x_{Ct+1} < 0 and the equation above. The last inequality follows from the equation above. Therefore, we have −1 ≤ x_{Cτ+1} < x̂_{Cτ+2} < 1 and hence x_{Cτ+2} = x̂_{Cτ+2}. Then after the (Cτ + 2)-th update, we have: DISPLAYFORM10 where κ = αβ_2(1 − β_2)/(162√C) is a constant that depends on α, β_2 and C. The first inequality follows from the equation above. The second inequality follows from the two preceding equations. The last inequality is due to the following lower bound: DISPLAYFORM11 where the last inequality follows from the equation above. Therefore, we have −1 ≤ x_{Cτ+1} < x̂_{Cτ+3} < x_{Cτ+2} < 1. Furthermore, since the gradients ∇f_i(x) = 0 when i mod C ≠ 1 and i mod C ≠ 2, we have DISPLAYFORM12 Then, following the equation above, we have DISPLAYFORM13 Similarly, we can subsequently obtain DISPLAYFORM14 and generally DISPLAYFORM15 for t ≥ τ.
Let t' be such that 2κ(DISPLAYFORM16. This contradicts the assumption that x_{Ct+1} < 0 for all t ≥ τ. We complete the proof of this lemma. We now return to the proof of Theorem 1. The following analysis focuses on iterations after Ct' + 1 such that x_{Ct'+1} ≥ 0. Note that any regret before Ct' + 1 is just a constant, since t' is independent of T; thus the average regret is negligible as T → ∞. Our claim is that x_k ≥ 0 for all k ∈ N, k ≥ Ct' + 1. To prove this, we resort to the principle of mathematical induction. Suppose for some t ∈ N, t ≥ t', we have x_{Ct+1} ≥ 0. Our aim is to prove that x_i ≥ 0 for all i ∈ N ∩ [Ct + 2, C(t + 1) + 1]. From the (Ct + 1)-th update of ADAM in the recursion above, we obtain: DISPLAYFORM17 We consider the following two cases: 1. Suppose x̂_{Ct+2} > 1; then x_{Ct+2} = Π_F(x̂_{Ct+2}) = min{x̂_{Ct+2}, 1} = 1 (note that in one dimension, Π_{F,√V_t} = Π_F is the simple Euclidean projection). After the (Ct + 2)-th update, we have: DISPLAYFORM18 The last inequality follows from the equation above. The first inequality follows from DISPLAYFORM19 2. Suppose x̂_{Ct+2} ≤ 1; then after the (Ct + 2)-th update, similar to the earlier derivation, we have: DISPLAYFORM20 In both cases, x̂_{Ct+3} ≥ 0, which translates to x_{Ct+3} = x̂_{Ct+3} ≥ 0. Furthermore, since the gradients ∇f_i(x) = 0 when i mod C ≠ 1 and i mod C ≠ 2, we have DISPLAYFORM21 Therefore, given x_{Ct'+1} ≥ 0, the claim holds for all k ∈ N, k ≥ Ct' + 1 by the principle of mathematical induction. Thus, we have DISPLAYFORM22 where k ∈ N, k ≥ t'. Therefore, when t ≥ t', for every C steps, ADAM suffers a regret of at least 1. More specifically, R_T ≥ (T − t')/C. Thus, R_T/T ↛ 0 as T → ∞, which completes the proof. Theorem 2 generalizes the optimization setting used in Theorem 1. We notice that the example proposed by BID14 in their Appendix B already satisfies the constraints listed in Theorem 2. Here we provide the setting of the example for completeness. Proof. Consider the setting where the f_t are linear functions and F = [−1, 1]. In particular, we define the following function sequence: DISPLAYFORM0 where C ∈ N, C mod 2 = 0, satisfies the following: DISPLAYFORM1 where γ = β_1/√β_2 < 1. It is not hard to see that these conditions hold for a large constant C that depends on β_1 and β_2. According to the proof given by BID14 in their Appendix B, in such a setting R_T/T ↛ 0 as T → ∞, which completes the proof. The example proposed by BID14 in their Appendix C already satisfies the constraints listed in Theorem 3. Here we provide the setting of the example for completeness. Proof. Let δ be an arbitrarily small positive constant. Consider the following one-dimensional stochastic optimization setting over the domain [−1, 1]. At each time step t, the function f_t(x) is chosen as follows: DISPLAYFORM0 where C is a large constant that depends on β_1, β_2 and δ. The expected function is F(x) = δx. Thus the optimal point over [−1, 1] is x* = −1. The step taken by ADAM is DISPLAYFORM1 According to the proof given by BID14 in their Appendix C, there exists a large enough C such that E[∆_t] ≥ 0, which then implies that ADAM's steps keep drifting away from the optimal solution x* = −1. Note that there is no limitation on the initial step size α here. Therefore, we complete the proof. Proof. Let x* = argmin_{x∈F} Σ_{t=1}^T f_t(x), which exists since F is closed and convex.
We begin with the following observation: DISPLAYFORM0 Using Lemma 1 with u_1 = x_{t+1} and u_2 = x*, we have the following: DISPLAYFORM1 Rearranging the above inequality, we have DISPLAYFORM2 DISPLAYFORM3 The second inequality uses the fact that β_{1t} ≤ β_1 < 1. In order to further simplify the bound above, we need to use a telescoping sum. We observe that, by the definition of η_t, we have η_{t,i} ≤ η_{t−1,i}. Using the D_∞ bound on the feasible region and making use of the above property, we have DISPLAYFORM5 The equality follows from a simple telescoping sum, which yields the desired result. It is easy to see that the regret of ADABOUND is upper bounded by O(√T). Theorem 5. Let {x_t} and {v_t} be the sequences obtained from Algorithm 3, with β_1 = β_{11}, β_{1t} ≤ β_1 for all t ∈ [T] and β_1/√β_2 < 1. Suppose η_l(t + 1) ≥ η_l(t) > 0, η_u(t + 1) ≤ η_u(t), η_l(t) → α* as t → ∞, η_u(t) → α* as t → ∞, L_∞ = η_l(1) and R_∞ = η_u(1). Assume that ∥x − y∥_∞ ≤ D_∞ for all x, y ∈ F and ∥∇f_t(x)∥ ≤ G_2 for all t ∈ [T] and x ∈ F. For x_t generated using the ADABOUND algorithm, we have a regret bound of order O(√T).
We further directly compare the performance between SGDM and ADABOUND for each α (or α*). The results are shown in Figure 7. We can see that ADABOUND outperforms SGDM for all the step sizes. Since the form of the bound functions has a minor impact on the performance of ADABOUND, it is likely to beat SGDM even without carefully tuned hyperparameters. To summarize, the form of the bound functions does not much influence the final performance of the methods. In other words, ADABOUND is not sensitive to its hyperparameters. Moreover, it can achieve higher or similar performance to SGDM even when it is not carefully fine-tuned. Therefore, we can expect better performance from ADABOUND regardless of the choice of bound functions. Here we provide an empirical study of the evolution of the learning rates of ADABOUND over time. We conduct an experiment using a ResNet-34 model on the CIFAR-10 dataset with the same settings as in Section 5. We randomly choose two layers in the network. For each layer, the learning rates of its parameters are recorded at each time step. We pick the min/median/max values of the learning rates in each layer and plot them against epochs in FIG4. We can see that the learning rates increase rapidly in the early stage of training; then, after a few epochs, the max/median values gradually decrease over time and finally converge to the final step size. The increase at the beginning is due to the property of the exponential moving average of φ_t in ADAM, while the gradual decrease indicates the transition from ADAM to SGD.
Novel variants of optimization methods that combine the benefits of both adaptive and non-adaptive methods.
984
scitldr
An important problem that arises in reinforcement learning and Monte Carlo methods is estimating quantities defined by the stationary distribution of a Markov chain. In many real-world applications, access to the underlying transition operator is limited to a fixed set of data that has already been collected, without additional interaction with the environment being available. We show that consistent estimation remains possible in this scenario, and that effective estimation can still be achieved in important applications. Our approach is based on estimating a ratio that corrects for the discrepancy between the stationary and empirical distributions, derived from fundamental properties of the stationary distribution, and exploiting constraint reformulations based on variational divergence minimization. The resulting algorithm, GenDICE, is straightforward and effective. We prove the consistency of the method under general conditions, provide a detailed error analysis, and demonstrate strong empirical performance on benchmark tasks, including off-line PageRank and off-policy policy evaluation. Estimation of quantities defined by the stationary distribution of a Markov chain lies at the heart of many scientific and engineering problems. Famously, the steady-state distribution of a random walk on the World Wide Web provides the foundation of the PageRank algorithm. In many areas of machine learning, Markov chain Monte Carlo (MCMC) methods are used to conduct approximate Bayesian inference by considering Markov chains whose equilibrium distribution is a desired posterior. An example from engineering is queueing theory, where the queue lengths and waiting times under the limiting distribution have been extensively studied. As we will also see below, stationary distribution quantities are of fundamental importance in reinforcement learning (RL). Classical algorithms for estimating stationary distribution quantities rely on the ability to sample next states from the current state by directly interacting with the environment (as in on-line RL or MCMC), or even require the transition probability distribution to be given explicitly (as in PageRank). Unfortunately, these classical approaches are inapplicable when direct access to the environment is not available, which is often the case in practice. There are many practical scenarios where a collection of sampled trajectories is available, having been collected off-line by an external mechanism that chose states and recorded the subsequent next states. Given such data, we still wish to estimate a stationary quantity. One important example is off-policy policy evaluation in RL, where we wish to estimate the value of a policy different from that used to collect experience. Another example is off-line PageRank (OPR), where we seek to estimate the relative importance of webpages given a sample of the web graph. Motivated by the importance of these off-line scenarios, and by the inapplicability of classical methods, we study the problem of off-line estimation of stationary values via a stationary distribution corrector. Instead of having access to the transition probabilities or a next-state sampler, we assume only access to a fixed sample of state transitions, where states have been sampled from an unknown distribution and next-states are sampled according to the Markov chain's transition operator.
This off-line setting is distinct from that considered by most MCMC or on-line RL methods, where it is assumed that new observations can be continually sampled on demand from the environment. The off-line setting is indeed more challenging than its more traditional on-line counterpart, given that one must infer an asymptotic quantity from finite data. Nevertheless, we develop techniques that still allow consistent estimation under general conditions, and provide effective estimates in practice. The main contributions of this work are:
• We formalize the problem of off-line estimation of stationary quantities, which captures a wide range of practical applications.
• We propose a novel stationary distribution estimator, GenDICE, for this task. The resulting algorithm is based on a new dual embedding formulation for divergence minimization, with a carefully designed mechanism that explicitly eliminates degenerate solutions.
• We theoretically establish consistency and other statistical properties of GenDICE, and empirically demonstrate that it achieves significant improvements on several behavior-agnostic off-policy evaluation benchmarks and an off-line version of PageRank.
The methods we develop in this paper fundamentally extend recent work in off-policy policy evaluation by introducing a new formulation that leads to a more general, and as we will show, more effective estimation method. We first introduce off-line PageRank (OPR) and off-policy policy evaluation (OPE) as two motivating domains, where the goal is to estimate stationary quantities given only off-line access to a set of sampled transitions from an environment. The celebrated PageRank algorithm defines the ranking of a web page in terms of its asymptotic visitation probability under a random walk on the (augmented) directed graph specified by the hyperlinks. If we denote the World Wide Web by a directed graph G = (V, E) with vertices (web pages) v ∈ V and edges (hyperlinks) (v, u) ∈ E, PageRank considers the random walk defined by the Markov transition operator v → u: P(u|v) = (1 − η) 1[(v, u) ∈ E]/|v| + η/|V|, where |v| denotes the out-degree of vertex v and η ∈ [0, 1] is the probability of "teleporting" to any page uniformly. Define d_t(v):= P(s_t = v | s_0 ∼ µ_0, ∀i < t, s_{i+1} ∼ P(·|s_i)), where µ_0 is the initial distribution over vertices; the original PageRank algorithm explicitly iterates this recursion to the limit d(v):= lim_{t→∞} d_t(v) (Equation 1). The classical version of this problem is solved by tabular methods that simulate Equation 1. However, we are interested in a more scalable off-line version of the problem where the transition model is not explicitly given. Instead, consider estimating the rank of a particular web page v from a large web graph, given only a sample D = {(v, u)} of transitions from a random walk on G as specified above. We would still like to estimate d(v) based on this data. First, note that if one knew the distribution p by which any vertex v appeared in D, the target quantity could be re-expressed through a simple importance ratio between d and p evaluated on the sampled vertices.
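For contrast with the off-line setting, a minimal sketch (ours; function and variable names are illustrative) of the classical tabular approach shows how Equation 1 is simulated when the transition model is explicitly available, which is exactly the access the off-line problem lacks:

```python
# Power iteration for PageRank: requires the full transition operator
# P(u|v) = (1 - eta) * 1[(v,u) in E] / |v| + eta / |V|, unlike the
# off-line setting, where only sampled transitions (v, u) are given.
import numpy as np

def pagerank_power_iteration(adj, eta=0.15, iters=100):
    n = adj.shape[0]
    out_deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)  # dangling nodes ignored
    P = (1 - eta) * adj / out_deg + eta / n
    d = np.full(n, 1.0 / n)
    for _ in range(iters):
        d = d @ P                    # d_{t+1}(u) = sum_v d_t(v) P(u|v)
    return d
```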
Policy Evaluation (Preliminaries) An important generalization of this stationary value estimation problem arises in RL in the form of policy evaluation. Consider a Markov Decision Process (MDP) M = ⟨S, A, P, R, γ, µ_0⟩, where S is a state space, A is an action space, P(s'|s, a) denotes the transition dynamics, R is a reward function, γ ∈ (0, 1] is a discount factor, and µ_0 is the initial state distribution. Given a policy, which chooses actions in any state s according to the probability distribution π(·|s), a trajectory β = (s_0, a_0, r_0, s_1, a_1, r_1, . . .) is generated by first sampling the initial state s_0 ∼ µ_0, and then, for t ≥ 0, a_t ∼ π(·|s_t), r_t ∼ R(s_t, a_t), and s_{t+1} ∼ P(·|s_t, a_t). The value of a policy π is the expected per-step reward, defined in the average case as R(π):= lim_{T→∞} (1/T) E[Σ_{t=0}^{T−1} r_t] and in the discounted case as R_γ(π):= (1 − γ) E[Σ_{t=0}^{∞} γ^t r_t]. Here the expectation is taken with respect to the randomness in the state-action pairs, P(s'|s, a) π(a'|s'), and the rewards R(s_t, a_t). Without loss of generality, we assume the limit exists for the average case, and hence R(π) is finite. Behavior-agnostic Off-Policy Evaluation (OPE) A natural version of policy evaluation that often arises in practice is to estimate R(π) or R_γ(π) from a fixed dataset of transitions drawn from p(s, a), where p(s, a) is an unknown distribution induced by multiple unknown behavior policies. This problem is different from the classical form of OPE, where it is assumed that a known behavior policy π_b is used to collect transitions; in the behavior-agnostic scenario we are considering here, standard importance sampling (IS) estimators cannot be applied. Let d_t^π(s, a) = P(s_t = s, a_t = a | s_0 ∼ µ_0, ∀i < t, a_i ∼ π(·|s_i), s_{i+1} ∼ P(·|s_i, a_i)). The stationary distribution can then be defined as µ_γ^π(s, a):= (1 − γ) Σ_{t=0}^{∞} γ^t d_t^π(s, a) for γ ∈ (0, 1), and as the limiting average of d_t^π for γ = 1 (Equation 4). From this definition, note that R(π) and R_γ(π) can be equivalently re-expressed as expectations of the reward under the corresponding stationary distribution, i.e., R_γ(π) = E_{(s,a)∼µ_γ^π}[R(s, a)] (Equation 5). Here we see once again that if we had the correction ratio function τ(s, a) = µ_γ^π(s, a)/p(s, a), we could estimate the policy value by a weighted average of rewards over the off-policy data, which provides an empirical estimate of p(s, a). In this way, the behavior-agnostic OPE problem can be reduced to estimating the correction ratio function τ, as above. We note that prior works also exploit Equation 5 to reduce OPE to stationary distribution correction, but they are distinct from the current proposal in different ways. First, the inverse propensity score (IPS) method assumes the transitions are sampled from a single behavior policy, which must be known beforehand; hence that approach is not applicable in the behavior-agnostic setting. Second, the recent DualDICE algorithm is also a behavior-agnostic OPE estimator, but its derivation relies on a change-of-variable trick that is only valid for γ < 1. This previous formulation becomes unstable when γ → 1, as shown in Section 6 and Appendix E. The behavior-agnostic OPE estimator we derive below in Section 3 is applicable both when γ = 1 and when γ ∈ (0, 1). This connection is why we name the new estimator GenDICE, for GENeralized stationary DIstribution Correction Estimation.
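To make the reduction of Equation 5 concrete, here is a small sketch (ours; the self-normalization is a common practical choice, not necessarily the authors' exact estimator) of how a learned ratio turns off-policy data into a value estimate:

```python
# Given tau(s, a) ~= mu(s, a) / p(s, a) and transitions (s, a, r) ~ p,
# Equation 5 gives R(pi) ~= E_p[tau(s, a) * r]; dividing by the mean weight
# (self-normalization) guards against E_p[tau] drifting from 1 in practice.
import numpy as np

def estimate_policy_value(tau, states, actions, rewards):
    w = np.array([tau(s, a) for s, a in zip(states, actions)])
    return float(np.sum(w * np.asarray(rewards)) / np.sum(w))
```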
As noted, there are important estimation problems in the Markov chain and MDP settings that can be recast as estimating a stationary distribution correction ratio. We first outline the conditions that characterize the correction ratio function τ, upon which we construct the objective for the GenDICE estimator, and then design an efficient algorithm for optimization. We will develop our approach for the more general MDP setting, with the understanding that all the methods and results can be easily specialized to the Markov chain setting. The stationary distribution µ_γ^π defined in Equation 4 can also be characterized via
µ_γ^π(s', a') = (1 − γ) µ_0(s') π(a'|s') + γ ∫ π(a'|s') P(s'|s, a) µ_γ^π(s, a) ds da =: (T ∘ µ_γ^π)(s', a')   (Equation 6).
At first glance, this equation shares a superficial similarity with the Bellman equation, but there is a fundamental difference. The Bellman operator recursively integrates out future (s', a') pairs to characterize the value of a current pair (s, a), whereas the distribution operator T defined in Equation 6 operates in the reverse temporal direction. When γ ≤ 1, Equation 6 always has a fixed-point solution. For γ = 1, in the discrete case, the fixed point exists as long as T is ergodic; in the continuous case, the conditions for fixed-point existence become more complicated and are beyond the scope of this paper. The development below is based on a divergence D and the following default assumption. Assumption 1 (Markov chain regularity) For the given target policy π, the resulting state-action transition operator T has a unique stationary distribution µ that satisfies D(T ∘ µ ∥ µ) = 0. In the behavior-agnostic setting we consider, one does not have direct access to P for element-wise evaluation or sampling, but instead is given a fixed set of samples from P(s'|s, a) p(s, a) with respect to some distribution p(s, a) over S × A. Define T^p_{γ,µ_0} to be a mixture of µ_0 π and T^p; i.e., let
(T^p_{γ,µ_0} ∘ τ)(s', a'):= (1 − γ) µ_0(s') π(a'|s') + γ ∫ π(a'|s') P(s'|s, a) τ(s, a) p(s, a) ds da   (Equation 7).
Obviously, conditioning on (s, a, s') one could easily sample a' ∼ π(a'|s') to form (s, a, s', a') ∼ T^p((s, a), (s', a')); similarly, a sample (s', a') ∼ µ_0(s') π(a'|s') could be formed from s'. Mixing such samples with probability γ and 1 − γ respectively yields a sample (s, a, s', a') ∼ T^p_{γ,µ_0}((s, a), (s', a')). Based on these observations, the stationary condition for the ratio from Equation 6 can be re-expressed in terms of T^p_{γ,µ_0} as
p(s', a') τ(s', a') = (T^p_{γ,µ_0} ∘ τ)(s', a')   (Equation 8),
where τ(s, a) = µ_γ^π(s, a)/p(s, a) is the correction ratio function we seek to estimate. One natural approach to estimating τ* is to match the LHS and RHS of Equation 8 with respect to some divergence D(· ∥ ·) over the empirical samples. That is, we consider estimating τ* by solving the optimization problem min_τ D(T^p_{γ,µ_0} ∘ τ ∥ p · τ). Although this forms the basis of our approach, there are two severe issues with this naive formulation that first need to be rectified:
• Degenerate solutions: when γ = 1, the stationarity condition is scale-invariant, so the naive objective admits degenerate solutions such as τ = 0 or any improperly scaled multiple of the true ratio.
• The divergence involves the quantity T^p_{γ,µ_0} ∘ τ, which in general involves an intractable integral. Thus, evaluation of the exact objective is intractable, and it neglects the assumption that we only have access to samples from T^p_{γ,µ_0} and are not able to evaluate it at arbitrary points.
We address each of these two issues in a principled manner. To avoid degenerate solutions when γ = 1, we ensure that the solution is a proper density ratio; that is, the property τ ∈ Ξ:= {τ(·) ≥ 0, E_p[τ] = 1} must hold for any τ that is a ratio of some density to p. This provides an additional constraint that we add to the optimization formulation, min_{τ∈Ξ} D(T^p_{γ,µ_0} ∘ τ ∥ p · τ). With this additional constraint, it is obvious that the trivial solution τ(s, a) = 0 is eliminated as an infeasible point, along with other degenerate solutions τ(s, a) = c τ*(s, a) with c ≠ 1. Unfortunately, exactly solving an optimization with expectation constraints is very complicated in general, particularly given a nonlinear parameterization for τ. The penalty method provides a much simpler alternative, where a sequence of regularized problems is solved,
min_τ J(τ) = D(T^p_{γ,µ_0} ∘ τ ∥ p · τ) + (λ/2)(E_p[τ] − 1)^2   (Equation 11),
with λ increasing. The drawback of the penalty method is that it generally requires λ → ∞ to ensure strict feasibility, which is still impractical, especially with stochastic gradient descent: an infinite λ may induce unbounded variance in the gradient estimator, and thus divergence in optimization. However, by exploiting the special structure of the solution sets of Equation 11, we can show that, remarkably, it is unnecessary to increase λ. Theorem 1 For γ ∈ (0, 1] and any λ > 0, the solution to Equation 11 is given by τ*(s, a) = µ_γ^π(s, a)/p(s, a). The detailed proof of Theorem 1 is given in Appendix A.1. By Theorem 1, we can estimate the desired correction ratio function τ* by solving only one optimization with an arbitrary λ > 0. The optimization in Equation 11 involves the integrals T^p_{γ,µ_0} ∘ τ and E_p[τ] inside nonlinear loss functions, and hence appears difficult to solve. Moreover, obtaining unbiased gradients with a naive approach requires double sampling.
Instead, we bypass both difficulties by applying a dual embedding technique. In particular, we assume the divergence D is in the form of an f-divergence,
D_φ(T^p_{γ,µ_0} ∘ τ ∥ p · τ) = ∫ p(s, a) τ(s, a) φ((T^p_{γ,µ_0} ∘ τ)(s, a) / (p(s, a) τ(s, a))) ds da,
where φ(·): R_+ → R is a convex, lower-semicontinuous function with φ(1) = 0. Plugging this into J(τ) in Equation 11, we can easily check the convexity of the objective. Theorem 2 For an f-divergence with valid φ defining D_φ, the objective J(τ) is convex w.r.t. τ. The detailed proof is provided in Appendix A.2. Recall that a suitable convex function can be represented as φ(x) = max_f {x · f − φ*(f)}, where φ* is the Fenchel conjugate of φ(·). In particular, we have the representation (1/2) x^2 = max_u {ux − (1/2) u^2}, which allows us to re-express the objective in terms of dual variables. Applying the interchangeability principle, one can replace the inner max in the first term over a scalar f with a max over a function f(·, ·): S × A → R. This yields the main optimization formulation,
min_τ max_{f, u} E_{T^p_{γ,µ_0} ∘ τ}[f] − E_{p·τ}[φ*(f)] + λ (E_p[u τ] − u − (1/2) u^2)   (Equation 13),
which avoids the aforementioned difficulties and is well-suited for practical optimization, as discussed in Section 3.4. In addition to f-divergences, the proposed estimator in Equation 11 is compatible with other divergences, such as integral probability metrics (IPMs) (Müller, 1997), while retaining consistency. Based on the definition of an IPM, these divergences directly lead to min-max optimizations similar to Equation 13, with the identity function as φ*(·) and different feasible sets for the dual functions. Specifically, maximum mean discrepancy (MMD) requires ∥f∥_{H_k} ≤ 1, where H_k denotes the RKHS with kernel k; the Dudley metric requires ∥f∥_{BL} ≤ 1, where ∥f∥_{BL}:= ∥f∥_∞ + ∥∇f∥_2; and the Wasserstein distance requires ∥∇f∥_2 ≤ 1. These additional requirements on the dual function might incur some extra difficulty in practice. For example, with the Wasserstein distance and the Dudley metric, we might need to include an extra gradient penalty, which requires additional computation to take the gradient through a gradient. Meanwhile, the consistency of the surrogate loss under regularization is not clear. For MMD, we can obtain a closed-form solution for the dual function, which saves the cost of the inner optimization, but with the tradeoff of requiring two independent samples in each outer optimization update. Moreover, MMD relies on the condition that the dual function lies in some RKHS, which introduces additional kernel parameters to be tuned and in practice may not be sufficiently flexible compared to neural networks.
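Returning to the conjugate representation used above, a quick numerical check (ours) confirms the quadratic identity behind the dual variable u:

```python
# Verify 0.5 * x^2 = max_u (u * x - 0.5 * u^2), attained at u = x.
import numpy as np
x = 1.7
u = np.linspace(-5, 5, 10001)
print(np.max(u * x - 0.5 * u ** 2), 0.5 * x ** 2)  # both ~= 1.445
```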
We have derived a consistent stationary distribution correction estimator in the form of the min-max saddle-point optimization of Equation 13. Here, we present a practical instantiation of GenDICE with a concrete objective and parametrization. We choose the χ²-divergence, which is an f-divergence with φ(x) = (x − 1)² and conjugate φ*(y) = y + y²/4. There are two major reasons for adopting the χ²-divergence: i) In the behavior-agnostic OPE problem, we mainly use the ratio correction function for estimating the policy value, which is an expectation. Recall that the error between the estimate and the ground truth can then be bounded by the total variation distance, which is a lower bound on the χ²-divergence. ii) For the alternative divergences, the conjugate of the KL-divergence involves exp(·), which may lead to instability in optimization, while the IPM variants introduce extra constraints on the dual function, which may be difficult to optimize. The conjugate function of the χ²-divergence enjoys suitable numerical properties and provides a squared regularization. We have provided an additional empirical ablation study that investigates the alternative divergences in Appendix E.3. To parameterize the correction ratio τ and the dual function f we use neural networks, τ(s, a) = nn_{w_τ}(s, a) and f(s, a) = nn_{w_f}(s, a), where w_τ and w_f denote the parameters of τ and f respectively. Since the optimization requires τ to be non-negative, we add an extra positive neuron, such as exp(·), log(1 + exp(·)) or (·)², at the final layer of nn_{w_τ}(s, a). We empirically compare the different positive neurons in Section 6.3. For these representations, an unbiased gradient estimator ∇_{(w_τ, u, w_f)} J(τ, u, f) can be obtained straightforwardly, as shown in Appendix B. This allows us to apply stochastic gradient descent to solve the saddle-point problem in Equation 14 in a scalable manner, as illustrated in Algorithm 1. We provide a theoretical analysis for the proposed GenDICE algorithm, following a similar learning setting and assumptions to prior work. The main result is summarized in the following theorem. A formal statement, together with the proof, is given in Appendix C. Theorem 3 (Informal) Under mild conditions, with learnable F and H, the error in the objective between the GenDICE estimate τ̂ and the solution τ* is bounded as E[J(τ̂) − J(τ*)] = Õ(ε_approx(F, H) + 1/√N + ε_opt), where E[·] is taken w.r.t. the randomness in D and in the optimization algorithms, ε_opt is the optimization error, and ε_approx(F, H) is the approximation error induced by (F, H) for the parametrization of (τ, f). The theorem shows that the suboptimality of GenDICE's solution, measured in terms of the objective function value, can be decomposed into three terms: the approximation error ε_approx, which is controlled by the representation flexibility of the function classes; the estimation error due to sample randomness, which decays at the order of 1/√N; and the optimization error, which arises from the suboptimality of the solution found by the optimization algorithm. As discussed in Appendix C, in special cases this suboptimality can be bounded below by a divergence between τ̂ and τ*, and therefore directly bounds the error in the estimated policy value. There is also a tradeoff between these three error terms. With more flexible function classes (e.g., neural networks) for F and H, the approximation error ε_approx becomes smaller. However, this may increase the estimation error (through the constant in front of 1/√N) and the optimization error (by solving a harder optimization problem). On the other hand, if F and H are linearly parameterized, the estimation and optimization errors tend to be smaller and can often be upper-bounded explicitly, as shown in Appendix C.3. However, the corresponding approximation error will be larger.
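Putting the pieces of this section together, the sketch below (ours; network names, batch layout, and the exact penalty weighting are our assumptions rather than the authors' reference implementation) forms the empirical saddle-point objective with the χ² conjugate φ*(y) = y + y²/4, to be minimized over τ and maximized over (f, u):

```python
# Empirical GenDICE objective: the f-divergence in its Fenchel dual form
# plus the dual of the penalty (lambda/2) * (E_p[tau] - 1)^2, namely
# lambda * (u * E_p[tau] - u - 0.5 * u^2), maximized at u = E_p[tau] - 1.
import torch

def gendice_loss(tau_net, f_net, u, batch, gamma=0.99, lam=1.0):
    s0, a0, s, a, s_nxt, a_nxt = batch        # (s0,a0) ~ mu0*pi; (s,a,s',a') ~ p*pi
    tau = tau_net(s, a).squeeze(-1)           # non-negative via a positive output layer
    f_sa = f_net(s, a).squeeze(-1)
    phi_star = lambda y: y + y ** 2 / 4.0     # conjugate of (x - 1)^2
    lin = (1 - gamma) * f_net(s0, a0).mean() \
        + gamma * (tau * f_net(s_nxt, a_nxt).squeeze(-1)).mean()
    loss = lin - (tau * phi_star(f_sa)).mean() \
        + lam * (u * tau.mean() - u - 0.5 * u ** 2)
    return loss  # descend w.r.t. tau_net's parameters; ascend w.r.t. f_net's and u
```

In training, one would alternate stochastic gradient steps: minimize this loss with respect to the ratio network's parameters, and maximize it (e.g., by negating it) with respect to the dual network and the scalar u.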
Off-policy evaluation with importance sampling (IS) has been explored in the contextual bandits setting (Dudík et al., 2011) and the episodic RL setting, achieving many empirical successes (e.g., Dudík et al., 2011). Unfortunately, IS-based methods suffer from exponential variance in long-horizon problems, known as the "curse of horizon". A few variance-reduction techniques have been introduced, but they still cannot eliminate this fundamental issue. By rewriting the accumulated reward as an expectation w.r.t. a stationary distribution, recent works recast OPE as estimating a correction ratio function, which significantly alleviates the variance. However, these methods still require the off-policy data to be collected by a single and known behavior policy, which restricts their practical applicability. The only published algorithm in the literature, to the best of our knowledge, that solves behavior-agnostic off-policy evaluation is DualDICE. However, DualDICE was developed for discounted problems, and its results become unstable when the discount factor approaches 1 (see below). By contrast, GenDICE can cope with the more challenging problem of undiscounted reward estimation in the general behavior-agnostic setting. Note that standard model-based methods, which estimate the transition and reward models directly and then calculate the expected reward based on the learned model, are also applicable to the behavior-agnostic setting considered here. Unfortunately, model-based methods typically rely heavily on modeling assumptions about rewards and transition dynamics. In practice, these assumptions do not always hold, and the evaluation can become unreliable. For more related work on MCMC, density ratio estimation and PageRank, please refer to Appendix F. In this section, we evaluate GenDICE on OPE and OPR problems. For OPE, we use one or multiple behavior policies to collect a fixed number of trajectories at some fixed trajectory length. This data is used to recover a correction ratio function for a target policy π, which is then used to estimate the average reward in two different settings: i) average reward; and ii) discounted reward. In both settings, we compare with a model-based approach and step-wise weighted IS. We also compare to the inverse propensity scoring approach of prior work (referred to as "IPS" here) in the Taxi domain with a learned behavior policy. We specifically compare to DualDICE in the discounted reward setting, as it is a direct and current state-of-the-art baseline. For OPR, the main comparison is with the model-based method, where the transition operator is empirically estimated and the stationary distribution is recovered via an exact solver. We validate GenDICE in both tabular and continuous cases, and perform an ablation study to further demonstrate its effectiveness. All results are based on 20 random seeds, with means and standard deviations plotted. Our code will be publicly available for reproduction. Offline PageRank on Graphs One direct application of GenDICE is off-line PageRank (OPR). We test GenDICE on a Barabasi-Albert (BA) graph (synthetic) and two real-world graphs, Cora and Citeseer. Details of the graphs are given in Appendix D. We use the log KL-divergence between the estimated stationary distribution and the ground truth as the evaluation metric, with the ground truth computed by an exact solver based on the exact transition operator of the graphs. We compare GenDICE with the model-based method in terms of sample efficiency. From the results in Figure 1, GenDICE outperforms the model-based method when limited data is given. Even with 20k samples for a BA graph with 100 nodes, where the transition matrix has 10k entries, GenDICE still shows better performance in the offline setting. This is reasonable, since GenDICE directly estimates the stationary distribution vector or ratio, while the model-based method needs to learn an entire transition matrix that has many more parameters. We use a similar Taxi domain as in prior work, where a grid size of 5 × 5 yields 2000 states in total (25 × 16 × 5, corresponding to 25 taxi locations, 16 passenger appearance statuses and 5 taxi statuses). We set the target policy to the final policy π after running tabular Q-learning for 1000 iterations, and set another policy π⁺, obtained after 950 iterations, as the base policy. The behavior policy is a mixture controlled by α: π_b = (1 − α)π + απ⁺.
For the model-based method, we use a tabular representation for the reward and transition functions, whose entries are estimated from behavior data. For IS and IPS, we fit a policy via behavior cloning to estimate the policy ratio. In this specific setting, our method achieves better results compared to IS, IPS and the model-based method. Interestingly, with longer horizons, IS cannot improve as much as the other methods even with more data, while GenDICE consistently improves and achieves much better results than the baselines. DualDICE only works with γ < 1. GenDICE is more stable than DualDICE when γ becomes larger (close to 1), while still showing competitive performance for smaller discount factors γ. We further test our method for OPE on three control tasks: a discrete-control task, Cartpole, and two continuous-control tasks, Reacher and HalfCheetah. In these tasks, observations (or states) are continuous, so we use neural network function approximators and stochastic optimization. Since DualDICE has shown state-of-the-art performance on discounted OPE, we mainly compare with it in the discounted reward case. We also compare to IS with a policy learned via behavior cloning, and to a neural model-based method, similar to the tabular case but with a neural network as the function approximator. All neural networks are feed-forward with two hidden layers of dimension 64 and tanh activations. More details can be found in Appendix D. Due to limited space, we put the discrete control results in Appendix E and focus on the more challenging continuous control tasks. The good performance of IS and model-based methods in Section 6.1 quickly deteriorates as the environment becomes complex, i.e., with a continuous action space. Note that GenDICE is able to maintain good performance in this scenario, even when using function approximation and stochastic optimization. This is reasonable because of the difficulty of fitting the coupled policy-environment dynamics with a continuous action space. Here we also empirically validate GenDICE with off-policy data collected by multiple policies. As illustrated in Figure 3, all methods perform better with longer trajectory lengths or more trajectories. When α becomes larger, i.e., the behavior policies are closer to the target policy, all methods perform better, as expected. GenDICE demonstrates good performance in both the average-reward and discounted-reward cases across these settings. The right two figures in each row show the log MSE curve versus optimization steps, where GenDICE achieves the smallest loss. In the discounted reward case, GenDICE shows significantly better and more stable performance than the strong baseline, DualDICE. Figure 4 also shows better performance of GenDICE than all baselines in the more challenging HalfCheetah domain. Each plot in the second row shows the average reward case. Finally, we conduct an ablation study on GenDICE to study its robustness and implementation sensitivities. We investigate the effects of learning rate, activation function, discount factor, and the specifically designed ratio constraint. We further demonstrate the effect of the choice of divergence and the penalty weight in Appendix E.3. Figure 4: Results on HalfCheetah. Plots from left to right show the log MSE of the estimated average per-step reward over different truncated lengths, numbers of trajectories, and behavior policies, in the discounted and average reward cases.
Effects of the Learning Rate. Since we use neural networks as function approximators together with stochastic optimization, it is necessary to assess sensitivity to the learning rate, chosen from {0.0001, 0.0003, 0.001, 0.003}, with results in Figure 5. When α = 0.33, i.e., the OPE task is relatively easier, GenDICE obtains better results at all learning rate settings. However, when α = 0.0, i.e., the estimation becomes more difficult, GenDICE only obtains reasonable results with the larger learning rates. Generally, this ablation study shows that the proposed method is not sensitive to the learning rate, and is easy to train. We further investigate the effects of the activation function on the last layer, which ensures the non-negative outputs required for the ratio. To better understand which activation function leads to stable training for the neural correction estimator, we empirically compare using i) (·)²; ii) log(1 + exp(·)); and iii) exp(·). In practice, we use (·)², since it achieves low variance and better performance in most cases, as shown in Figure 5. We vary γ ∈ {0.95, 0.99, 0.995, 0.999, 1.0} to probe the sensitivity of GenDICE. Specifically, we compare to DualDICE, and find that GenDICE is stable, while DualDICE becomes unstable when γ becomes large, as shown in Figure 6. GenDICE is also more general than DualDICE, as it can be applied to both the average and discounted reward cases. In Section 3, we highlighted the importance of the ratio constraint. Here we investigate the trivial-solution issue that arises without the constraint. The results in Figure 6 demonstrate the necessity of adding the constraint penalty, since a trivial solution prevents an accurate corrector from being recovered (green line in the left two figures). In this paper, we proposed a novel algorithm, GenDICE, for general stationary distribution correction estimation, which can handle both the discounted and average stationary distributions given multiple behavior-agnostic samples. Empirical results on off-policy evaluation and offline PageRank show the superiority of the proposed method over the existing state-of-the-art methods. We begin the appendix by stating our assumption on the existence of the stationary distribution; our discussion is based on this assumption. Assumption 1 Under the target policy, the resulting state-action transition operator T has a unique stationary distribution in terms of the divergence D(·||·). If the total variation divergence is selected, Assumption 1 requires that the transition operator be ergodic, as discussed in prior work. Theorem 1 For arbitrary λ > 0, the solution to the optimization in Equation is the stationary density ratio τ*(x) = μ(x)/p(x). Proof For γ ∈ (0, 1), there are no degenerate solutions to D_φ(T p_{γ,μ0} ∘ τ || p ∘ τ). The optimal solution is a density ratio. Therefore, the extra penalty (E_{p(x)}[τ(x)] − 1)² does not affect the optimality for any λ > 0. Both terms are non-negative, and the density ratio μ(x)/p(x) drives both terms to zero; hence, the density ratio is a solution to J(τ). For any other non-negative function τ(x) ≥ 0, if it is the optimal solution to J(τ), then the corresponding first-order optimality conditions hold. We denote μ(x) = p(x)τ(x), which is clearly a density function. Then, the optimality conditions in Equation 15 imply that μ is invariant under T, or equivalently, that μ is the stationary distribution of T. We have thus shown that the optimal τ(x) = μ(x)/p(x) is the target density ratio. Proof Since φ is convex, we consider the Fenchel dual representation of the f-divergence; the penalty (E_p[τ] − 1)² is also convex, which concludes the proof.
Algorithm 1 (GenDICE with SGD). Inputs: convex function φ and its Fenchel conjugate φ*, initial state s_0 ∼ μ_0, target policy π, distribution correctors nn_{w_τ}(·, ·) and nn_{w_f}(·, ·), constraint scalar u, learning rates η_τ, η_f, η_u, number of iterations K, batch size B. For each of the K iterations, sample a batch of transitions together with initial states and on-policy actions, take stochastic gradient steps on (w_τ, u, w_f), and after the final iteration return nn_{w_τ}. We provide the unbiased gradient estimator for ∇_{(w_τ, u, w_f)} J(τ, u, f) below; with it, the pseudo-code applies SGD to solve the saddle-point problem. For convenience, we repeat here the notation defined in the main text. The saddle-point reformulation of the objective function of GenDICE is as given in Equation 14. To avoid numerical infinity in D_φ(·||·), we introduced a bounded version, which is still a valid divergence, and therefore the optimal solution τ* is still the stationary density ratio μ(x)/p(x). We denote Ĵ(τ, u, f) as the empirical surrogate of J(τ, u, f) with optimal dual (f*, u*), and we apply an optimization algorithm to Ĵ(τ, u, f) over the space (H, F, R), leading to the output (τ̂, û, f̂). Under Assumption 2, we need only consider ||τ||_∞ ≤ C; the corresponding dual then satisfies u = E_p[τ] − 1, so u ∈ U := {|u| ≤ C + 1}. We choose φ*(·) to be κ-Lipschitz continuous, which bounds the remaining quantities accordingly. We consider the error between τ̂ and τ* using standard arguments. Remark: in some special cases, the suboptimality also implies a distance between τ̂ and τ*. Specifically, for γ = 1, if the transition operator P^π can be represented as P^π = QΛQ^{−1}, where Q denotes the (countable) eigenfunctions and Λ denotes the diagonal matrix of eigenvalues, the largest of which is 1, we consider φ(·) as the identity and f ∈ F := span(Q) with ||f||_{p,2} ≤ 1; then d(τ, τ*) is bounded from below by a metric between τ and τ*. In particular, writing τ = ατ* + ζ, where ζ ∈ span(Q) \ {τ*}, and recalling the optimality of τ̂, we start with the following error decomposition. For ε_1, we consider the terms one by one. By definition, the first term is induced by introducing F for the dual approximation; for the third term, we define ε_est accordingly. Therefore, we can now bound ε_1. We then consider the terms from right to left. For the term J(τ*_H) − J(τ*), the difference is induced by restricting the function space to H. The second term is non-positive, due to the optimality of (u*, f*). The final inequality comes from the optimality of τ̂*_H, whose second term is likewise non-positive. Finally, for the term Ĵ(τ*_H) − J(τ*_H), using the same argument as in Equation 21, we can bound ε_2 by ε_2 ≤ C_{φ,C,λ} ε_approx(H) + C_{P^π,κ,λ} ε_approx(F) + 2ε_est. In sum, we have d(τ̂, τ*) ≤ 4ε_est + ε̂_opt + 2C_{P^π,κ,λ} ε_approx(F) + C_{φ,C,λ} ε_approx(H). In the following sections, we bound ε_est and ε̂_opt. In this section, we analyze the statistical error ε_est. We mainly focus on the batch RL setting, which has been studied by previous authors. However, as discussed in the literature, using the standard blocking technique, the statistical error provided here can be generalized to β-mixing samples in a single sample path. We omit this generalization for the sake of expositional simplicity. To bound ε_est, we follow arguments similar to prior work, via the covering number. For completeness, the definition is given below. Pollard's tail inequality bounds the maximum deviation via the covering number of a function class, for i.i.d. samples from some distribution.
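To make the alternating SGD scheme concrete, here is a minimal PyTorch sketch of one saddle-point step. The functional form of J(τ, u, f) below is our reading of the saddle-point objective for the χ²-divergence case, whose Fenchel conjugate we take to be φ*(t) = t + t²/4; treat both of these as assumptions rather than the exact released implementation. `tau` and `f` are networks as sketched earlier, and `u` is a scalar `nn.Parameter` owned by the dual optimizer.

```python
import torch

def phi_star(t):
    # Assumed Fenchel conjugate of the chi^2 generator phi(x) = (x - 1)^2.
    return t + t ** 2 / 4

def gendice_objective(tau, f, u, batch, gamma, lam):
    """J(tau, u, f): minimised over tau, maximised over (f, u)."""
    s0, a0, s, a, s_next, a_next = batch  # initial samples and transitions
    t = tau(s, a)
    return ((1 - gamma) * f(s0, a0).mean()
            + gamma * (t * f(s_next, a_next)).mean()
            - (t * phi_star(f(s, a))).mean()
            + lam * (u * t.mean() - u - 0.5 * u ** 2))

def sgd_step(tau, f, u, batch, gamma, lam, opt_tau, opt_dual):
    # Ascent step on (f, u): minimise -J with the dual optimizer.
    opt_dual.zero_grad()
    (-gendice_objective(tau, f, u, batch, gamma, lam)).backward()
    opt_dual.step()
    # Descent step on tau.
    opt_tau.zero_grad()
    gendice_objective(tau, f, u, batch, gamma, lam).backward()
    opt_tau.step()
```

Each optimizer zeroes its own gradients before its backward pass, so stale gradients from the other player's step are discarded on the next call.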
Then, for any given ε > 0, the stated deviation bound holds. The covering number can then be bounded in terms of the function class's pseudo-dimension. Lemma 5 (Corollary 3 of prior work) For any set X, any points x_{1:N} ∈ X^N, any class F of functions on X taking values in [0, M] with pseudo-dimension D_F < ∞, and any ε > 0, the covering number is bounded accordingly. The statistical error ε_est can be bounded using these lemmas. Lemma 6 (Stochastic error) Under Assumption 2, if φ* is κ-Lipschitz continuous and the pseudo-dimensions of H and F are finite, then with probability at least 1 − δ, the stated bound on ε_est holds. Proof The proof works by verifying the conditions in Lemma 4 and computing the covering number. We apply Lemma 4 with Z = Ω × Ω, Z_i = (x_i, x'_i), and G = h_{H×F×U}. We first check the boundedness of h_{ζ,u,f}(x, x'). Based on Assumption 2, we only consider τ ∈ H and u ∈ U bounded by C and C + 1, respectively. We also rectify ||f||_∞ ≤ C. Then we can bound ||h||_∞, where C_φ = max_{t∈[−C,C]} −φ*(t). Thus, by Lemma 4, the deviation bound follows. Next, we check the covering number of G. First, we bound the distance in G. Denoting the pseudo-dimensions of H and F as D_H and D_F, respectively, we obtain the bound for the statistical error. In this section, we investigate the optimization error. Notice that our estimator min_{τ∈H} max_{f∈F, u∈U} Ĵ(τ, u, f) is compatible with different parametrizations for (H, F) and different optimization algorithms, so the optimization error will differ accordingly. For general neural network parametrizations of (τ, f), although there has been recent progress on convergence to a stationary point or local minimum, it remains a largely open problem to quantify the optimization error, which is out of the scope of this paper. Here, we mainly discuss the convergence rate with tabular, linear and kernel parametrizations of (τ, f). In particular, we consider the linear parametrization, i.e., τ(x) = σ(w_τ^⊤ψ(x)), f(x) = w_f^⊤ψ(x), where σ(·): R → R_+ is convex. There are many choices of σ(·), e.g., exp(·), log(1 + exp(·)) and (·)². Obviously, even with such a nonlinear mapping, Ĵ(τ, u, f) is still convex-concave w.r.t. (w_τ, w_f, u) by the convex composition rule. We can bound ε̂_opt by the primal-dual gap ε_gap. With vanilla SGD, the gap decays at rate O(1/√T), where T is the number of optimization steps and the expectation E[·] is taken w.r.t. the randomness in SGD. We are now ready to state the main theorem in a precise way: under Assumptions 1 and 2, if the stationary distribution μ exists and the pseudo-dimensions of H and F are finite, then the error between the GenDICE estimate and τ* satisfies E[d(τ̂, τ*)] = O(ε_approx(F, H) + 1/√N + ε_opt), where E[·] is w.r.t. the randomness in the sample D and in the optimization algorithms, ε_opt is the optimization error, and ε_approx(F, H) is the approximation error induced by (F, H) for the parametrization of (τ, f). Proof We decompose the total error as in Equation 25, where ε_approx := 2C_{T,κ,λ} ε_approx(F) + C_{φ,C,λ} ε_approx(H). For ε_opt, we apply the results for SGD in Appendix C.3. We bound E[ε_est] by Lemma 6. Plugging all these bounds into Equation 25, we achieve the result. For the Taxi domain, we follow the same protocol as used in prior work; the behavior and target policies are also taken from there (referred to in that work as the behavior policy for α = 0). We use a similar taxi domain, where a grid size of 5×5 yields 2000 states in total (25×16×5, corresponding to 25 taxi locations, 16 passenger appearance statuses and 5 taxi statuses). We set our target policy as the final policy π* after running Q-learning for 1000 iterations, and set another policy π_+ after 950 iterations as our base policy.
The behavior policy is a mixture policy controlled by α, π_b = (1 − α)π_+ + απ*, i.e., the larger α is, the closer the behavior policy is to the target policy. In this setting, we solve for the optimal stationary ratio τ exactly using matrix operations. Since prior work performs a similar exact solve for the |S| variables μ(s), for better comparison we also perform our exact solve with respect to the |S| variables τ(s). Specifically, the final objective of importance sampling requires knowledge of the importance weights μ(a|s)/p(a|s). For offline PageRank, the graph statistics are illustrated in Table 1, and the degree statistics and graph visualization are shown in Figure 7. A Barabasi-Albert (BA) graph begins with an initial connected network of m_0 nodes. Each new node is connected to m ≤ m_0 existing nodes with a probability that is proportional to the number of links that the existing nodes already have. Intuitively, heavily linked nodes ('hubs') tend to quickly accumulate even more links, while nodes with only a few links are unlikely to be chosen as the destination for a new link; new nodes thus have a 'preference' to attach themselves to already heavily linked nodes. The two real-world graphs are built upon real-world citation networks. In our experiments, the weights of the BA graph are randomly drawn from a standard Gaussian distribution and normalized to ensure a valid transition matrix. The offline data is collected by a random walker on the graph, yielding initial-state and next-state pairs within a single trajectory. In the experiments, we vary the number of off-policy samples to validate the effectiveness of GenDICE when only limited offline samples are provided. We use the Cartpole, Reacher and HalfCheetah tasks as given by OpenAI Gym. In importance sampling, we learn a neural network policy via behavior cloning, and use its probabilities for computing the importance weights π*(a|s)/π(a|s). All neural networks are feed-forward with two hidden layers of dimension 64 and tanh activations. Discrete Control Tasks. We modify the Cartpole task to be infinite horizon: we use the same dynamics as in the original task but change the reward to be −1 if the original task returns a termination (when the pole falls below some threshold) and 1 otherwise. We train a policy on this task with standard Deep Q-Learning until convergence. We then define the target policy π* as a weighted combination of this pre-trained policy (weight 0.7) and a uniformly random policy (weight 0.3). The behavior policy π for a specific 0 ≤ α ≤ 1 is taken to be a weighted combination of the pre-trained policy (weight 0.55 + 0.15α) and a uniformly random policy (weight 0.45 − 0.15α). We train each stationary distribution correction estimation method using the Adam optimizer with batches of size 2048 and learning rates chosen using a hyperparameter search over {0.0001, 0.0003, 0.001, 0.003}; the best choice was 0.0003. For the Reacher task, we train a deterministic policy until convergence via DDPG. We define the target policy π as a Gaussian with mean given by the pre-trained policy and standard deviation 0.1. The behavior policy π_b for a specific 0 ≤ α ≤ 1 is taken to be a Gaussian with mean given by the pre-trained policy and standard deviation 0.4 − 0.3α.
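As an illustration of the OPR data-collection protocol, here is a short sketch using networkx. For simplicity the walk below is over the unweighted graph, whereas the paper additionally draws Gaussian edge weights and normalizes them into a transition matrix, so treat this as a simplified stand-in; the function name and defaults are ours.

```python
import networkx as nx
import numpy as np

def collect_ba_random_walk(n_nodes=100, m=2, n_samples=10000, seed=0):
    """Build a Barabasi-Albert graph and collect (state, next_state)
    pairs from a random walk, serving as offline data for OPR."""
    rng = np.random.default_rng(seed)
    g = nx.barabasi_albert_graph(n_nodes, m, seed=seed)
    state = int(rng.integers(n_nodes))
    pairs = []
    for _ in range(n_samples):
        # Uniform neighbor choice approximates an unweighted random walk.
        nxt = int(rng.choice(list(g.neighbors(state))))
        pairs.append((state, nxt))
        state = nxt
    return g, pairs
```

Varying `n_samples` reproduces the limited-offline-data regime studied in the experiments.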
We train each stationary distribution correction estimation method using the Adam optimizer with batches of size 2048 and learning rates chosen using a hyperparameter search over {0.0001, 0.0003, 0.001, 0.003}; the optimal learning rate found was 0.003. For the HalfCheetah task, we also train a deterministic policy until convergence via DDPG. We define the target policy π as a Gaussian with mean given by the pre-trained policy and standard deviation 0.1. The behavior policy π_b for a specific 0 ≤ α ≤ 1 is taken to be a Gaussian with mean given by the pre-trained policy and standard deviation 0.2 − 0.1α. We train each stationary distribution correction estimation method using the Adam optimizer with batches of size 2048 and learning rates chosen using a hyperparameter search over {0.0001, 0.0003, 0.001, 0.003}; the optimal learning rate found was 0.003. E.1 OPE FOR DISCRETE CONTROL. On the discrete control task, we modify the Cartpole task to be infinite horizon: the original dynamics are used but with a modified reward function, in which the agent receives −1 if the environment returns a termination (i.e., the pole falls below some threshold) and 1 otherwise. As shown in Figure 3, our method is competitive with IS and the model-based method in the average reward case, but finally outperforms both in terms of log MSE loss. Specifically, it is relatively difficult to fit a policy to data collected by multiple policies, which explains the poor performance of IS. In this section, we show more results on the continuous control tasks, i.e., HalfCheetah and Reacher. Figure 9 shows the log MSE against training steps, where GenDICE outperforms the other baselines with different behavior policies. Figure 10 better illustrates how our method beats the other baselines and can accurately estimate the reward of the target policy. Besides, Figure 11 shows that GenDICE gives better reward estimates of the target policy. In these figures, the left three panels show the performance with off-policy datasets collected by a single behavior policy, from more difficult to easier tasks; the right two panels show the results where the off-policy dataset is collected by multiple behavior policies. Figure 12 shows the ablation study in terms of estimated rewards. The left two panels show the effects of different learning rates. When α = 0.33, i.e., the OPE task is relatively easier, GenDICE gets relatively good results at all learning rate settings. However, when α = 0.0, i.e., the estimation becomes more difficult, only GenDICE with the larger learning rates gets reasonable estimates. Interestingly, we can see that with larger learning rates the performance becomes better, although when the learning rate is 0.001 with α = 0.0 the variance is very high, with the estimation becoming more accurate only in some cases. The right three panels show different activation functions with different behavior policies. The square and softplus functions work well, while the exponential function shows poor performance under some settings. In practice, we use the square function because of its low variance and better performance in most cases. Figure 11: Results on HalfCheetah. Each plot in the first row shows the estimated average step reward over training and different behavior policies (higher α corresponds to a behavior policy closer to the target policy).
Although any valid divergence between p · τ and T p_{γ,μ0} ∘ τ in our estimator is consistent, and will always lead to the stationary distribution correction ratio asymptotically, and although any λ > 0 guarantees the normalization constraint E_p[τ] = 1, as we discussed in the main text, different choices of the divergence and of λ may incur difficulty in the numerical optimization procedure. In this section, we investigate the empirical effects of the choice of f-divergence and IPM, and of the weight of the constraint regularization λ. (Figure 12: Results of the ablation study with different learning rates and activation functions; the plots show the estimated average step reward over training and different behavior policies.) To avoid the effects of other factors in the estimator, e.g., function parametrization, we focus on the offline PageRank task on the BA graph with 100 nodes and 10k offline samples. All performances are evaluated over 20 random trials. We test GenDICE with several alternative divergences, e.g., Wasserstein-1 distance, Jensen-Shannon divergence, KL-divergence, Hellinger divergence, and MMD. To ensure the dual function is 1-Lipschitz, we add a gradient penalty. We use a learned Gaussian kernel in MMD, similar to prior work. As we can see in Figure 13(a), with these different divergences the proposed GenDICE estimator always achieves significantly better performance than the model-based estimator, showing that the GenDICE estimator is compatible with many different divergences. Most of the divergences, with appropriate extra techniques to handle the difficulties in optimization and careful tuning of extra parameters, can achieve similar performance, consistent with phenomena seen among the variants of GANs. However, KL-divergence is an outlier, performing noticeably worse, which might be caused by the ill-behaved exp(·) in its conjugate function. The χ²-divergence and JS-divergence are better, achieving good performance with fewer parameters to be tuned. The effect of the penalty weight λ is illustrated in Figure 13(b). We vary λ ∈ [0.1, 5] with the χ²-divergence in the GenDICE estimator. Within a large range of λ, the performance of the proposed GenDICE is quite consistent, which justifies Theorem 1. The penalty is multiplied by λ; therefore, as λ increases, the variance of the stochastic gradient estimator also increases, which explains the increased variance for large λ in Figure 13(b). In practice, λ = 1 is a reasonable choice for general cases. Markov Chain Monte Carlo. Classical MCMC aims at sampling from μ^π by iteratively simulating from the transition operator. It requires continuous interaction with the transition operator and heavy computational cost to update many particles. Amortized SVGD and adversarial MCMC alleviate this issue by combining with neural networks, but they still interact with the transition operator directly, i.e., in an on-policy setting. The major difference in our GenDICE is the learning setting: we only access an off-policy dataset, and cannot sample from the transition operator. The proposed GenDICE leverages stationary density ratio estimation for approximating the stationary quantities, which distinguishes it from classical methods. Density ratio estimation is a fundamental tool in machine learning and much related work exists. Classical density ratio estimation includes moment matching (Gretton et al.), probabilistic classification, and ratio matching.
These classical methods focus on estimating the ratio between two distributions using samples from both of them, while GenDICE estimates the density ratio to a stationary distribution of a transition operator, from which even a single sample is difficult to obtain. Prior work developed a reverse-time RL framework for PageRank by solving a reverse Bellman equation, which is less sensitive to graph topology and shows faster adaptation to graph changes; however, it still considers the online setting, which differs from our OPR setting.
In this paper, we proposed a novel algorithm, GenDICE, for general stationary distribution correction estimation, which can handle both discounted and average off-policy evaluation on multiple behavior-agnostic samples.
The power of neural networks lies in their ability to generalize to unseen data, yet the underlying reasons for this phenomenon remain elusive. Numerous rigorous attempts have been made to explain generalization, but available bounds are still quite loose, and analysis does not always lead to true understanding. The goal of this work is to make generalization more intuitive. Using visualization methods, we discuss the mystery of generalization, the geometry of loss landscapes, and how the curse (or, rather, the blessing) of dimensionality causes optimizers to settle into minima that generalize well. Neural networks are a powerful tool for solving classification problems. The power of these models is due in part to their expressiveness; they have many parameters that can be efficiently optimized to fit nearly any finite training set. However, the real power of neural network models comes from their ability to generalize; they often make accurate predictions on test data that were not seen during training, provided the test data is sampled from the same distribution as the training data. The generalization ability of neural networks is seemingly at odds with their expressiveness. Neural network training algorithms work by minimizing a loss function that measures model performance using only training data. Because of their flexibility, it is possible to find parameter configurations for neural networks that perfectly fit the training data and minimize the loss function while making mostly incorrect predictions on test data. Figure 1: A minefield of bad minima: we train a neural net classifier and plot the iterates of SGD after each tenth epoch (red dots). We also plot locations of nearby "bad" minima with poor generalization (blue dots). We visualize these using a t-SNE embedding. All blue dots achieve near perfect train accuracy, but with test accuracy below 53% (random chance is 50%). The final iterate of SGD (yellow star) also achieves perfect train accuracy, but with 98.5% test accuracy. Miraculously, SGD always finds its way through a landscape full of bad minima, and lands at a minimizer with excellent generalization. Miraculously, commonly used optimizers reliably avoid such "bad" minima of the loss function, and succeed at finding "good" minima that generalize well. Our goal here is to develop an intuitive understanding of neural network generalization using visualizations and experiments rather than analysis. We begin with some experiments to understand why generalization is puzzling, and how over-parameterization impacts model behavior. Then, we explore how the "flatness" of minima correlates with generalization, and in particular try to understand why this correlation exists. We explore how the high dimensionality of parameter spaces biases optimizers towards landing in flat minima that generalize well. Finally, we present some counterfactual experiments to validate the intuition we develop. Code to reproduce experiments is available at https://github.com/genviz2019/genviz. Neural networks define a highly expressive model class. In fact, given enough parameters, a neural network can approximate virtually any function. But just because neural nets have the power to represent any function does not mean they have the power to learn any function from a finite amount of training data. Neural network classifiers are trained by minimizing a loss function that measures model performance using only training data.
A standard classification loss has the form L(θ) = (1/|D_t|) Σ_{(x,y)∈D_t} −log p_θ(x, y), where p_θ(x, y) is the probability that data sample x lies in class y according to a neural net with parameters θ, and D_t is the training dataset of size |D_t|. This loss is near zero when a model with parameters θ accurately classifies the training data. Over-parameterized neural networks (i.e., those with more parameters than training data) can represent arbitrary, even random, labeling functions on large datasets. As a result, an optimizer can reliably fit an over-parameterized network to training data and achieve near zero loss. We illustrate the difference between model fitting and generalization with an experiment. The CIFAR-10 training dataset contains 50,000 small images. We train two over-parameterized models on this dataset. The first is a neural network (ResNet-18) with 269,722 parameters (nearly 6× the number of training images). The second is a linear model with a feature set that includes pixel intensities as well as pair-wise products of pixel intensities. This linear model has 298,369 parameters, which is comparable to the neural network, and both are trained using SGD. On the left of Figure 2, we see that over-parameterization causes both models to achieve perfect accuracy on training data. But the linear model achieves only 49% test accuracy, while ResNet-18 achieves 92%. The excellent performance of the neural network model raises the question of whether bad minima exist at all. Maybe deep networks generalize because bad minima are rare and lie far away from the region of parameter space where initialization takes place? We can confirm the existence of bad minima by incorporating a loss term that explicitly promotes poor generalization, by discouraging performance on unseen data drawn from the same distribution. We do this by minimizing a two-term objective in which D_t is the training set and D_d is a set of unseen examples sampled from the same distribution. D_d could be obtained via a GAN or additional data collection (note that it is not the test set). Here, β parametrizes the amount of "anti-generalization" we wish to achieve. The first term is the standard cross entropy loss on the training set D_t, and is minimized when the training data are classified correctly. The second term is the reverse cross entropy loss on D_d, and is minimized when D_d is classified incorrectly. With a sufficiently over-parameterized network, gradient descent on this objective drives both terms to zero, and we find a parameter vector that minimizes the original training set loss while making mostly incorrect predictions on D_d. In other words, the minima found in this way are stationary points with comparable values of the original training objective, indicating that it is quite possible to land in one of these "bad" minima in a normal training routine if initialized within the loss basin. Section 5 will show that the likelihood of this occurring is negligible. When we use the anti-generalization loss to search for bad minima near the optimization trajectory, we see that bad minima are everywhere. We visualize the distribution of bad minima in Figure 1. We run a standard SGD optimizer on the swissroll and trace out the path it takes from a random initialization to a minimizer. We plot the iterate after every tenth epoch as a red dot with opacity proportional to its epoch number. Starting from these iterates, we run the anti-generalization optimizer to find nearby bad minima. We project the iterates and bad minima into a 2D plane for visualization using a t-SNE embedding.
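The text does not reproduce the exact anti-generalization equation, so the following PyTorch sketch is one plausible reading of the two-term objective: standard cross entropy on D_t plus β times a reverse cross entropy on D_d, where reverse cross entropy is taken to mean −log(1 − p_θ(x, y)), which goes to zero when the unseen examples are misclassified. The functional form is an assumption.

```python
import torch
import torch.nn.functional as F

def anti_generalization_loss(model, x_t, y_t, x_d, y_d, beta):
    """Cross entropy on training data D_t plus a reverse cross entropy
    term on unseen data D_d that rewards misclassification."""
    ce_train = F.cross_entropy(model(x_t), y_t)
    probs = F.softmax(model(x_d), dim=1)
    p_true = probs.gather(1, y_d.unsqueeze(1)).squeeze(1)
    # -log(1 - p) -> 0 as the true-class probability p -> 0 (assumed form).
    reverse_ce = -torch.log(1.0 - p_true + 1e-8).mean()
    return ce_train + beta * reverse_ce
```

Running SGD on this loss from an SGD iterate of the ordinary loss is what produces the nearby "bad" minima (blue dots) in Figure 1.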
Our anti-generalization optimizer easily finds minima with poor generalization in close proximity to every SGD iterate. Yet SGD avoids these bad minima, carving out a path towards a parameter configuration that generalizes well. Figure 1 illustrates that neural network optimizers are inherently biased towards good minima, a behavior commonly known as "implicit regularization." To see how the choice of optimizer affects generalization, we trained a simple neural network (VGG13) using 11 different gradient methods and 2 non-gradient methods, with results in Figure 2 (right). This includes LBFGS (a second-order method) and ProxProp (which chooses search directions by solving least-squares problems rather than using the gradient). Interestingly, all of these methods generalize far better than the linear model. While there are undeniably differences between the performance of different optimizers, the presence of implicit regularization for virtually any optimizer strongly indicates that implicit regularization may be caused in part by the geometry of the loss function, rather than the choice of optimizer alone. Later on, we visually explore the relationship between the loss function's geometry and generalization, and how the high dimensionality of parameter space is one source of implicit regularization for optimizers. Classical PAC learning theory balances model complexity (the expressiveness of a model class) against data volume. When a model class is too expressive relative to the volume of training data, it has the ability to ace the training data while flunking the test data, and learning fails. Classical theory fails to explain generalization in over-parameterized neural nets, as the complexity of networks is often large (exponential in depth or linear in the number of parameters). Therefore classical bounds become too loose or even vacuous in the over-parameterized setting that we are interested in studying. To explain this mismatch between empirical observation and classical theory, a number of recent works propose new metrics that characterize the capacity of neural networks. Most of these appeal to the PAC framework to characterize the generalization ability of a model class Θ (e.g., neural nets of a shared architecture) through a high-probability upper bound: with probability at least 1 − δ, R(θ) ≤ R̂_S(θ) + B, where R(θ) is the generalization risk (true error) of a net with parameters θ ∈ Θ, and R̂_S(θ) denotes the empirical risk (training error) with training sample S. We explain B under different metrics below. Model space complexity. This line of work takes B to be proportional to the complexity of the model class being trained, and efforts have been put into finding tight characterizations of this complexity. Recent works built on prior results to produce bounds where model class complexity depends on the spectral norm of the weight matrices, without having an exponential dependence on the depth of the network. Such bounds can improve the model class complexity provided that weight matrices adhere to some structural constraints (e.g. sparsity or eigenvalue concentration). Stability and robustness. This line of work considers B to be proportional to the stability of the model, which is a measure of how much changing a data point in S changes the output of the model. However it is nontrivial to characterize the stability of a neural network. Robustness, while producing insightful and effective generalization bounds, still suffers from the curse of dimensionality on the a priori known, fixed input manifold.
PAC-Bayes and margin theory. PAC-Bayes bounds provide generalization guarantees for randomized predictors drawn from a learned distribution that depends on the training data, as opposed to a single learned predictor. These bounds often yield sample complexity bounds worse than naive parameter counting; however, it has been shown that PAC-Bayes theory does provide meaningful generalization bounds for "flat" minima. Model compression. Most recent theoretical work can be understood through the lens of "model compression". Clearly, it is impossible to generalize when the model class is too big; in this case, many different parameter choices explain the data perfectly while having wildly different predictions on test data. The idea of model compression is that neural network model classes are effectively much smaller than they seem to be, because optimizers are only willing to settle into a very selective set of minima. When we restrict ourselves to only the narrow set of models that are acceptable to an optimizer, we end up with a smaller model class on which learning is possible. While our focus is on gaining insights through visualizations, the intuitive arguments below can certainly be linked back to theory. The class of models representable by a network architecture has extremely high complexity, but experiments suggest that most of these models are effectively removed from consideration by the optimizer, which has an extremely strong bias towards "flat" minima, resulting in a reduced effective model complexity. Over-parameterization is not specific to neural networks. A traditional approach to coping with over-parameterization for linear models is to use regularization (aka "priors") to bias the optimizer towards good minima. For linear classification, a common regularizer is the wide margin penalty (which appears in the form of an ℓ2 regularizer on the parameters of a support vector machine). When used with linear classifiers, wide margin priors choose the linear classifier that maximizes the Euclidean distance to the class boundaries while still classifying data correctly. Neural networks replace the classical wide margin regularization with an implicit regularization that promotes the closely related notion of "flatness." In this section, we explain the relationship between flat minima and wide margin classifiers, and provide intuition for why flatness is a good prior. Many have observed links between flatness and generalization. Early work first proposed that flat minima tend to generalize well. This idea was reinvigorated by the observation that large batch sizes yield sharper minima, and that sharp minima generalize poorly. This correlation was subsequently observed for a range of optimizers by Izmailov et al. Flatness is a measure of how sensitive network performance is to perturbations in parameters. Consider a parameter vector that minimizes the loss (i.e., it correctly classifies most if not all training data). If small perturbations to this parameter vector cause a lot of data misclassification, the minimizer is sharp; a small movement away from the optimal parameters causes a large increase in the loss function. In contrast, flat minima have training accuracy that remains nearly constant under small parameter perturbations. The stability of flat minima to parameter perturbations can be seen as a wide margin condition. When we add random perturbations to network parameters, it causes the class boundaries to wiggle around in space.
If the minimizer is flat, then training data lies a safe distance from the class boundary, and perturbing the class boundaries does not change the classification of nearby data points. In contrast, sharp minima have class boundaries that pass close to training data, putting those nearby points at risk of misclassification when the boundaries are perturbed. We visualize the impact of sharpness on neural networks in Figure 3. We train a 6-layer fully connected neural network on the swiss roll dataset using regular SGD, and also using the anti-generalization loss to find a minimizer that does not generalize. The "good" minimizer has a wide margin - the class boundary lies far away from the training data. The "bad" minimizer has almost zero margin, and each data point lies near the edge of class boundaries, on small class label "islands" surrounded by a different class label, or at the tips of "peninsulas" that reach from one class into the other. The class labels of most training points are unstable under perturbations to network parameters, and so we expect this minimizer to be sharp. An animation of the decision boundary under perturbation is provided at https://www.youtube.com/watch?v=4VUJyQknf4s&t=. We can visualize the sharpness of the minima in Figure 3, but we need to take some care with our metrics of sharpness. It is known that trivial definitions of sharpness can be manipulated simply by rescaling network parameters. When parameters are small (say, 0.1), a perturbation of size 1 might cause a major performance degradation. Conversely, when parameters are large (say, 100), a perturbation of size 1 might have little impact on performance. However, rescalings of network parameters are irrelevant; commonly used batch normalization layers remove the effect of parameter scaling. For this reason, it is important to define measures of sharpness that are invariant to trivial rescalings of network parameters. One such measure is local entropy, which is invariant to rescalings but difficult to compute. For our purposes, we use a previously proposed filter-normalization scheme, which simply rescales network filters to have unit norm before plotting. The resulting sharpness/flatness measures have been observed to correlate well with generalization. The bottom of Figure 3 visualizes loss function geometry around the two minima for the swiss roll. These surface plots show the loss evaluated on a random 2D plane sliced out of parameter space. We see that the instability of class labels under parameter perturbations does indeed lead to dramatically sharper minima for the bad minimizer, while the wide margin of the good minimizer produces a wide basin. To validate our observations on a more complex problem, we produce similar sharpness plots for the Street View House Numbers (SVHN) classification problem in Figure 4 using ResNet-18. The SVHN dataset is ideal for this experiment because, in addition to train and test data, the creators collected a large (531k) set of extra data from the same distribution that can be used for D_d in the anti-generalization loss. We minimize the SVHN loss function using standard training with and without the anti-generalization penalty. The good, well-generalizing minimizer is flat and achieves 97.1% test accuracy, while the bad minimizer is much sharper and achieves 28.2% test accuracy. Both achieve 100% train accuracy and use identical hyperparameters (other than the β factor), network architecture, and weight initialization.
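One plausible implementation of such a filter-wise normalisation, in the spirit of the scheme described above, is to sample a random perturbation direction and rescale each of its filters, here to match the norm of the corresponding filter of the trained model, which makes the plotted sharpness invariant to per-filter rescalings. The exact normalisation used for the plots may differ in detail; this sketch and its names are ours.

```python
import torch

def filter_normalized_direction(model):
    """Random direction in parameter space with filter-wise rescaling."""
    direction = []
    for p in model.parameters():
        d = torch.randn_like(p)
        if d.dim() > 1:  # conv / linear weights: rescale filter by filter
            for d_f, p_f in zip(d, p):
                d_f.mul_(p_f.norm() / (d_f.norm() + 1e-10))
        else:            # biases, batch-norm parameters: rescale as a whole
            d.mul_(p.norm() / (d.norm() + 1e-10))
        direction.append(d)
    return direction
```

Two such directions span the random 2D plane on which the loss surface plots are evaluated.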
We have seen that neural network loss functions are densely populated with both good and bad minima, and that good minima tend to have "flat" loss function geometry. But what causes optimizers to find these good/flat minima and avoid the bad ones? One possible explanation for the bias of stochastic optimizers towards good minima is the volume disparity between the basins around good and bad minima. Flat minima that generalize well lie in wide basins that occupy a large volume of parameter space, while sharp minima lie in narrow basins that occupy a comparatively small volume of parameter space. As a result, an optimizer using random initialization is more likely to land in the attraction basin for a good minimizer than a bad one. The volume disparity between good and bad minima is magnified by the curse (or, rather, the blessing?) of dimensionality. The differences in "width" between good and bad basins do not appear too dramatic in the visualizations in Figures 3 and 4, or in sharpness visualizations for other datasets. However, the probability of colliding with a region during a random initialization does not scale with its width, but rather its volume. Network parameters live in very high-dimensional spaces where small differences in sharpness between minima translate to exponentially large disparities in the volume of their surrounding basins. It should be noted that the vanishing probability of finding sets of small width in high dimensions is well studied by probabilists, and is formalized by a variety of escape theorems. To explore the effect of dimensionality on neural loss landscapes, we quantify the local volume within the low-lying basins surrounding different minima. The volume (or "horizon") of a basin is not well-defined, especially for SGD with discrete time-steps. For this experiment, we define the "basin" to be the set of points in a neighborhood of the minimizer that have loss value below a cutoff of 0.1 (Fig. 7). We chose this definition because the volume of this set can be efficiently computed. We calculate the volume of these basins using a Monte-Carlo integration method. Let r(φ) denote the radius of the basin (distance from minimizer to basin boundary) in the direction of the unit vector φ. Then the n-dimensional volume of the basin is V = v_n E_φ[r(φ)^n], where v_n = π^(n/2)/Γ(1 + n/2) is the volume of the unit n-ball, and Γ is Euler's gamma function. We estimate this expectation by calculating r(φ) for 3k random directions, as illustrated in Figure 7. In Figure 5, we visualize the combined relationship between generalization and volume for the swissroll and SVHN. By varying β, we control the generalizability of each minimizer. As generalization accuracy decreases, we see the radii of the basins decrease as well, indicating that minima become sharper. Figure 5 also contains scatter plots showing a severe correlation between generalization and (log) volume for various choices of the basin cutoff value. For SVHN, the basins surrounding good minima have a volume at least 10,000 orders of magnitude larger than that of bad minima, rendering it nearly impossible to accidentally stumble upon bad minima. Finally, we visualize the decision boundaries for several levels of generalization in Figure 6. All networks achieve above 99.5% training accuracy. As the generalization gap increases, the area that belongs to the red class begins encroaching into the area that belongs to the blue class, and vice versa.
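The Monte-Carlo estimate above can be sketched in a few lines of NumPy/SciPy. Working in log space is essential, since the reported volumes differ by thousands of orders of magnitude. The `radius_fn` below is assumed to be provided by the caller (e.g. a line search along the given direction until the loss exceeds the 0.1 cutoff); its name and the defaults are ours.

```python
import numpy as np
from scipy.special import gammaln, logsumexp

def log_basin_volume(radius_fn, dim, n_dirs=3000, seed=0):
    """Monte Carlo estimate of log V = log v_n + log E_phi[r(phi)^n]."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, dim))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # uniform on sphere
    log_r = np.log([radius_fn(phi) for phi in dirs])
    # log E[r^n] computed via log-sum-exp for numerical stability.
    log_mean_r_n = logsumexp(dim * log_r) - np.log(n_dirs)
    log_unit_ball = (dim / 2) * np.log(np.pi) - gammaln(1 + dim / 2)
    return log_unit_ball + log_mean_r_n
```

Comparing `log_basin_volume` across minimizers trained with different β values reproduces the volume-versus-generalization scatter plots described above.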
The margin between the decision boundary and training points also decreases until the training points, though correctly classified, sit on "islands" or "peninsulas" as discussed above. Neural nets solve complex classification problems by finding "flat" minima with class boundaries that assign labels that are stable to parameter perturbations. Using this intuition, can we formulate a problem that neural nets can't solve? Consider the problem of separating the blue and red dots in Figure 8. When the distance between the inner rings is large, a neural network consistently finds a well-behaved circular boundary as in Fig. 8a. The wide margin of this classifier makes the minimizer "flat," and the resulting high volume makes it likely to be found by SGD. We can remove the well-behaved minima from this problem by pinching the margin between the inner red and blue rings. In this case, a network trained with random initialization is shown in Fig. 8b. Now, SGD finds networks that cherry-pick red points, and arc away from the more numerous blue points to maintain a large margin. In contrast, a simple circular decision boundary as in Fig. 8a would pass extremely close to all points on the inner rings, making such a small-margin solution less stable under perturbations and unlikely to be found by SGD. We explored the connection between generalization and loss function geometry using visualizations and experiments on classification margin and loss basin volumes, the latter of which does not appear in the literature. While experiments can provide useful insights, they sometimes raise more questions than they answer. We explored why the "large margin" properties of flat minima promote generalization. But what is the precise metric for "margin" that neural networks respect? Experiments suggest that the small volume of bad minima prevents optimizers from landing in them. But what is a correct definition of "volume" in a space that is invariant to parameter re-scaling and other transforms, and how do we correctly identify the attraction basins for good minima? Finally and most importantly: how do we connect these observations back to a rigorous PAC learning framework? The goal of this study is to foster appreciation for the complex behaviors of neural networks, and to provide some intuitions for why neural networks generalize. We hope that the experiments contained here will provide inspiration for theoretical progress that leads us to rigorous and definitive answers to the deep questions raised by generalization.
An intuitive empirical and visual exploration of the generalization properties of deep neural networks.
We argue that symmetry is an important consideration in addressing the problem of systematicity and investigate two forms of symmetry relevant to symbolic processes. We implement this approach in terms of convolution and show that it can be used to achieve effective generalisation in three toy problems: rule learning, composition and grammar learning. Convolution has been an incredibly effective element in making Deep Learning successful. Applying the same set of filters across all positions in an image captures an important characteristic of the processes that generate the objects depicted in them, namely the translational symmetry of the underlying laws of nature. Given the impact of these architectures, researchers are increasingly interested in finding approaches that can be used to exploit further symmetries, such as rotation or scale. Here, we will investigate symmetries relevant to symbolic processing. We show that incorporating symmetries derived from symbolic processes into neural architectures allows them to generalise more robustly on tasks that require handling elements and structures that were not seen at training time. Specifically, we construct convolution-based models that outperform standard approaches on the infant rule-learning task, a simplified form of the SCAN task, and a simple context free language learning task. Symbolic architectures form the main alternative to conventional neural networks as models of intelligent behaviour, and have distinct characteristics and abilities. Specifically, they form representations in terms of structured combinations of atomic symbols. Their power comes not from the atomic symbols themselves, which are essentially arbitrary, but from the ability to construct and transform complex structures. This allows symbolic processing to happen without regard to the meaning of the symbols themselves, expressed in the formalist's motto as "If you take care of the syntax, the semantics will take care of itself". From this point of view, thought is a form of algebra (James, 1890; Boole, 1854) in which formal rules operate over symbolic expressions, without regard to the values of the variables they contain. As a consequence, those values can be processed systematically across all the contexts they occur in. So, for example, we do not need to know who Socrates is or even what mortal means in order to draw a valid conclusion from All men are mortal and Socrates is a man. However, connectionist approaches have been criticised as lacking this systematicity. Critics have claimed that neural networks lack the inherent ability to model the fact that cognitive capacities always exhibit certain symmetries, so that the ability to entertain a given thought implies the ability to entertain thoughts with semantically related contents. Thus, understanding these symmetries and designing neural architectures around them may enable us to build systems that demonstrate this systematicity. However, the concept of systematicity has itself drawn scrutiny and criticism from a range of researchers interested in real human cognition and behaviour, with some arguing that the definition is too vague. Nonetheless, understanding the symmetries of symbolic processes is likely to be fruitful in itself, even where human cognition fails to fully embody that idealisation. We investigate two kinds of symmetry, relating to permutations of symbols and to equivalence between memory slots.
The relation between symbols and their referents is, in principle, arbitrary, and any permutation of this correspondence is therefore a symmetry of the system. More simply, the names we give to things do not matter, and we should be able to get equivalent results whether we call it rose or trandafir, as long as we do so consistently. Following on from that, a given symbol should be treated consistently wherever we find it. This can be thought of as a form of symmetry over the various slots within the data structures, such as stacks and queues, where symbols can be stored. We explore these questions using a number of small toy problems and compare the performance of architectures with and without the relevant symmetries. In each case, we use convolution as the means of implementing the symmetry, which, in practical terms, allows us to rely only on standard deep learning components. In addition, this approach opens up novel uses for convolutional architectures, and suggests connections between symbolic processes and spatial representations. The first symmetry to be considered is the one arising from the fact that the correspondence between referring atomic symbols and their referents is entirely arbitrary, which entails that permutations of these symbols are symmetries of the system. In fact, this permutation invariance has been used to define the difference between the logical and non-logical parts of a language. This symmetry, in which any symbol is as good as any other, is anathema to the sort of problem that neural nets are typically applied to, in which the inputs are not arbitrary names but specific measurement values, e.g. images or medical records. In that case, effective learning requires discovering the appropriate differentiations, and would be sabotaged by randomly permuting the inputs. Nonetheless, experimental work with infants suggests that this symmetry may be relevant to human cognition, even for infants as young as 7 months. In these experiments, the infants were habituated to sequences of syllables which obeyed a simple rule, such as ABB (e.g. la ti ti) or ABA (e.g. la ti la). Subsequent testing on novel stimuli showed they were able to generalise this rule to syllables not present in the training stimuli (e.g. wo fe fe or wo fe wo). In other words, the representation of the learned rule allowed it to be abstracted from the particular training stimuli and applied to any syllable. One interpretation is that the infants were treating the stimuli symbolically, in that one syllable was as good as any other, and that as a consequence their behaviour was symmetric under replacement of the syllables. The original study was unable to obtain the same behaviour from a recurrent network architecture, because the statistical regularities it learned were linked to the specific syllables seen at training time, and so generalisation to unseen syllables was not achieved. Here we show that this problem can be solved by imposing a symmetry on the architecture, which corresponds to weight sharing between syllables. Practically, this is implemented as a one-dimensional convolution of width one, followed by max-pooling across all syllables, and a softmax to produce output probabilities. The input consists of a 12 × 3 array of binary values representing the 12 syllables and 3 time steps, with convolution treating the syllables as positions and the time steps as channels. The output of the convolution has two channels, which after pooling become the logits for the binary outputs.
Note that this is the opposite of how convolutions are most frequently used in application to language sequences. In that case, invariance to time translations is achieved by weight sharing across time steps, with the representation of each symbol being encoded in the channels. Here, in contrast, weight sharing happens between symbols and temporal information is encoded in the channels. This means our model is not invariant to translations in time, but is instead invariant to permutations of symbols. Figure 1 shows the input sequence wo fe wo being processed by this architecture. The input is first encoded as activation in the first and third channels at the wo position, and activation in the second channel at the fe position. Convolution reduces these three channels down to two, and pooling projects this down to a pair of logits corresponding to the ABB and ABA categories. The training and test inputs are taken from the original study, and we train the model to distinguish ABB sequences from ABA sequences. We also train a multi-layer perceptron and a recurrent net on the same data, with the recurrence happening over the time dimension. (Figure 1: The architecture applied to the rule learning task, consisting of convolution followed by max-pooling and softmax.) The results on the test set in Table 1 show that neither the multi-layer perceptron nor the recurrent network learns a rule that generalises effectively to unseen syllables. However, the weight sharing in the convolutional net requires that the same function is applied to each syllable, giving perfect generalisation. The filter, being applied at every position, cannot discriminate between syllables, and instead can only respond to the information about temporal structure in the channels. So, for example, in the case of the sequence wo fe wo, the input channels at the wo position take the values 101, representing the fact that the same token occurs in the first and third temporal slots. This is very similar to what has been suggested is being learned by the infants: algebra-like rules that represent relationships between placeholders (variables), such as 'the first item X is the same as the third item Y'. Thus, by imposing a symmetry on the network, we learn functions that are sensitive to an abstract structure rather than the specific raw syllables in the input. Thus, convolution provides a straightforward solution to this long-standing problem using the simple expedient of sharing weights between seen and unseen syllables. Moreover, we can make further use of the insight that symmetry allows us to abstract structure away from the particular content it contains.
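To make the construction concrete, here is a minimal PyTorch sketch of this classifier. The class and variable names are ours, and the training setup (cross entropy on the two output logits) is the standard choice rather than anything specified in the text.

```python
import torch
import torch.nn as nn

class RuleNet(nn.Module):
    """Width-1 convolution over syllable positions; time steps are channels."""
    def __init__(self, n_steps=3, n_classes=2):
        super().__init__()
        self.conv = nn.Conv1d(n_steps, n_classes, kernel_size=1)

    def forward(self, x):
        # x: (batch, n_steps, n_syllables) binary array. E.g. "wo fe wo"
        # puts 1s in channels 1 and 3 at the wo position and in channel 2
        # at the fe position.
        h = self.conv(x)            # (batch, n_classes, n_syllables)
        return h.max(dim=2).values  # max-pool over positions -> class logits
```

Because the same 1×1 filter is applied at every syllable position and max-pooling discards position identity, the network can only respond to the pattern across the time channels, which is exactly the ABB-versus-ABA structure.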
parsing the instruction sequence, composing items, and reversing the order of constituents linked by after. (Figure 2: the encoder-decoder architecture applied to the composition task. For each input word, two embeddings are learned, one of which is concatenated with the current hidden state and the other forms the filter in a convolution applied to that input. In the decoder, actions are predicted by taking a convolution of the hidden state, and recurrence between time steps is also a convolution.) Instead, we focus on a single problem: that of learning to compose an instruction such as jump with a modifier such as twice in a way that generalises systematically. Specifically, we construct a dataset in which the inputs are two word instructions drawn from a vocabulary of 10 commands (jump, walk, run, . . .) and 10 modifiers (one, two, three, . . .). Each instruction is to be translated into the corresponding action repeated the appropriate number of times, e.g. jump four → JUMP JUMP JUMP JUMP. We randomly sample with replacement 1000 such translation pairs, choose three combinations and remove all instances of them from the training data, and then exclusively test on these unseen pairings of command and modifier. For a network to behave systematically, it must learn to associate a modifier, e.g. four, with a structure, e.g. repeat the same action four times, that applies to all actions indiscriminately. Following the lesson learned in the previous experiment, we might expect that the structural information would be best represented in the channels and filter of a convolution, and the actions as positions which are then treated symmetrically. However, rather than hard-code this distinction into the encoder network innately, we allow a recurrent-convolutional architecture to discover this approach for itself. Each symbol is given two input representations, one of which is used as a filter in a width one convolution and the other of which feeds into the data that the convolution is applied to. Our intention is that the former should represent structural information (how many times to repeat) and the latter represent the content (which action) within that structure. As shown in Figure 2, information flows through the model from these inputs to the output logits through a series of convolutions, which impose a permutation symmetry on the function learned in training. This invariance to permutations of the output symbols should permit the model to learn representations of structure which abstract away from the particular actions seen during training. Within the recurrent-convolutional architecture, the hidden states, h t, consist of an array of units comprising 5 channels by 11 positions, and one dimensional convolutions of width one are used as the basis for both recurrence between time steps and also to project the output logits from the hidden states. In the encoder, two embeddings x and y are learned for every word, one of which is concatenated with the hidden state, to become the 6th channel, and the other is used as the filter in the convolution that produces the next hidden state. In the decoder, two convolutional filters are learned, one of which projects the 5 hidden channels down to a single channel, to form the output logits, and the other predicts the new hidden state from the old at each time step. The targets are encoded as one-hot values in an 11-dimensional vector (10 actions + pad), and the loss is the cross entropy with the predictions; a sketch of the whole architecture follows below.
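The following is a rough sketch of this encoder-decoder, under our own naming and the assumption that the word-specific filter is applied as a per-position matrix product (which is what a width-one convolution amounts to); training code is omitted.

```python
# Recurrent-convolutional encoder-decoder for the composition task.
import torch
import torch.nn as nn

M, P = 5, 11  # channels, positions (10 actions + pad)

class ComposeNet(nn.Module):
    def __init__(self, vocab_size):
        super().__init__()
        self.x_emb = nn.Embedding(vocab_size, P)            # content embedding: one extra channel
        self.y_emb = nn.Embedding(vocab_size, M * (M + 1))  # structure embedding: width-1 conv filter
        self.dec_hid = nn.Conv1d(M, M, kernel_size=1)       # decoder recurrence
        self.dec_out = nn.Conv1d(M, 1, kernel_size=1)       # projects channels to output logits

    def encode(self, words):
        h = torch.zeros(1, M, P)
        for w in words:
            g = torch.cat([h, self.x_emb(w).view(1, 1, P)], dim=1)  # (1, M+1, P)
            filt = self.y_emb(w).view(M, M + 1)
            # width-one convolution with a word-specific filter = per-position matmul
            h = torch.einsum('oc,bcp->bop', filt, g)
        return h

    def decode(self, h, n_steps):
        logits = []
        for _ in range(n_steps):
            logits.append(self.dec_out(h).squeeze(1))  # (1, P) action logits per step
            h = self.dec_hid(h)
        return torch.stack(logits, dim=1)  # cross entropy vs one-hot targets

net = ComposeNet(vocab_size=20)
words = [torch.tensor(3), torch.tensor(7)]     # e.g. "jump two"
print(net.decode(net.encode(words), 2).shape)  # torch.Size([1, 2, 11])
```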
We also add an L1 regularizer to the loss. Appendix B describes this architecture in more mathematical detail. The results in Table 2 demonstrate that only the architecture exploiting the convolutional symmetry generalizes systematically. The net learns to associate a command, e.g. jump, with an action, e.g. JUMP, and a modifier, e.g. two, with an abstract structure, e.g. repeat the same thing twice. This is possible because the symmetry across symbols allows a structure to be represented in a way that makes no reference to the particular symbols instantiating that structure. The structures considered in the previous section are extremely simple, having only short-range sequential dependencies. In contrast, the real grammars of natural languages produce long-range dependencies within hierarchical structures. Handling such structures, in which multiple dependencies are embedded within each other, will typically require some form of memory in order to keep track of the unresolved outer dependencies until the inner dependencies are completed. In this section we consider the role that symmetry plays in structuring this memory, again using convolution, which allows us to separate the contents of memory from the structure of how it is manipulated. In particular, we consider a reverse recall task that captures a key property of how such a memory has to operate. For example, in the sentence The racquet is actually very cheap, the subject noun, racquet, and main verb, is, display number agreement. In this case, both are singular, but they could also have been plural, i.e. racquets and are. For this simple sentence, noun and verb are adjacent, so the span of the dependency is minimal. However, we can, in principle, insert as much material as we like, and this syntactic connection persists. In particular, we can add a relative clause to the noun: The racquet that the tennis player uses is actually very cheap. Now another subject noun, player, and verb, uses, intervene between the first pair, but the dependency, and in particular the number agreement, between racquet and is has to be maintained. This remains true even when we insert another relative clause: The racquet that the tennis player we are all in awe of uses is actually very cheap. In this last case, the subject, we, and verb, are, are both plural, and effective processing of the whole sentence requires that this should not disrupt processing of the outer singular dependencies. In other words, a language user must be able to maintain a trace of multiple open dependencies until the relevant material is encountered. Notably, in the case of a centre embedded construction such as the sentence above, recall has to happen in a last-in-first-out manner as processing descends through the hierarchy and then rises back out again. That is, the subjects in the sentence above - racquet, player, we - are matched to verbs in reverse order - are, uses, is. A common model for such structures is the Context Free Grammar (CFG), which generates sentences in terms of production rules, such as those in Figure 3. These rules describe how the start symbol, S, is expanded into sequences of terminal symbols, such as adoda or ccbabdodbabcc. Each rule describes a substitution that can be applied to a single non-terminal symbol, i.e. S, A, B, C or D, to yield a sequence of symbols. The context free aspect of such a grammar lies in the fact that the substitutions are made without regard for the context around the original non-terminal symbol.
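Since the production rules of Figure 3 appear only in the figure, the generator below uses a plausible reconstruction of them (an assumption on our part), chosen to yield exactly the kind of strings the text describes: palindromes over a-d with a single central o, such as adoda.

```python
# Sampling strings from a reconstructed version of the Figure 3 grammar.
import random

RULES = {
    "S": [["A"], ["B"], ["C"], ["D"], ["o"]],
    "A": [["a", "S", "a"]],
    "B": [["b", "S", "b"]],
    "C": [["c", "S", "c"]],
    "D": [["d", "S", "d"]],
}

def generate(symbol="S"):
    """Expand a non-terminal without regard to its context (context-freeness)."""
    if symbol not in RULES:          # terminal symbol: a, b, c, d or o
        return symbol
    expansion = random.choice(RULES[symbol])
    return "".join(generate(s) for s in expansion)

s = generate()
assert s == s[::-1] and s.count("o") == 1  # palindromic, single central o
print(s)  # e.g. "adoda"
```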
In the case of the grammar described in Figure 3, the substitutions applied to the non-terminals A, B, C and D yield a new string with the same terminal at the beginning and the end, and the final rule inserts an o. As a consequence, the resulting strings are palindromes with a single o at their centre. An equivalent model is a pushdown automaton (PDA), in which hierarchical structure is handled by pushing symbols representing the outer structures onto a stack, until the interior structure is completed, and then popping symbols back off the stack to move outward in the hierarchy until no more symbols remain (Figure 3: a simple CFG producing palindromic strings; panel (a) gives its production rules). Crucially, the stack has a last-in-first-out structure that essentially returns items in the reverse order in which they were pushed onto it. In principle, such a system can handle sentences of unbounded length, containing arbitrarily long dependencies between constituents. In practice, however, language users struggle with nested structures more than two or three levels deep. Moreover, it is generally accepted that an ordinary CFG is not an accurate model of the grammatical structures that natural languages display. Instead, their grammars appear to be mildly context sensitive, as shown by evidence from Swiss German and Dutch. However, here we focus on CFGs and the ability of Recurrent Neural Nets to learn these structures. In particular, we investigate the ability of a Long Short Term Memory network to learn the simple palindromic language over the terminal symbols a, b, c, d and o, defined by the grammar in Figure 3. We train an LSTM containing 100 hidden units on 100,000 examples of strings of length 15, 17, 19, 21, 23 and 25, and then perform an in-domain test on novel strings of the same length, and an out-of-domain test on longer strings of lengths 29, 33, and 37. We also retrain the LSTM after removing all strings which contain more than 4 tokens of the symbol a from the training set, and then test only on examples from the test set containing more than 4 tokens of the symbol a. The first row of Table 3 gives the results for this evaluation in terms of the proportion of symbols after the central o symbol that were predicted correctly. The in-domain results, in the first column, make it clear that the net has learned the structure of the grammar, and how to make accurate predictions, at least for sequences of lengths seen at training time. The second column, containing the results for longer sequences, shows that generalisation outside the range of the training set is not robust. The third column indicates that the model has difficulty generalising to sequences where the nesting of a symbols is deeper than at training time, even though the actual length of sequences is unchanged. These failures of generalisation can be seen as symptoms of the same underlying problem. In particular, success on each of these out-of-domain tasks simply requires extending the application of the same rules. However, the problem arises because the network lacks the required concept of sameness. The LSTM cells form an unstructured memory resource, without any notion of two cells containing the same information. Nor can there be a meaning to the idea of applying the same rule to that information. The parameters for each cell are learned independently, and so each cell carries out its own isolated task.
During training these cells do learn to behave as a coherent whole, achieving impressive in-domain performance, but there is no way for the model to apply the same rule to the n-th item as it applied to the previous n − 1. To address this shortcoming, we propose to organize the memory cells into an ordered one-dimensional stack structure, and to use convolutions to control the flow of information across timesteps, replacing the forget gate. The translational symmetry of this convolutional layer gives meaning to the idea of the same symbol being able to be stored in different cells, and we use width-three filters to move information between neighbouring cells. As shown in Figure 4, each token in the input is given an embedding which is then concatenated with the current hidden state. Together these values form the inputs to units that control the flow of information into, out of and between the memory cells, as in a standard LSTM. In this case, however, the memory cells are a set of one-channel one-dimensional convolutional layers and the forget gate has been replaced with a set of width-three filters that shape the recurrent flow of information. These filters are the softmax outputs of units driven by the concatenated input embeddings and hidden units, allowing the network to use the input context to control the memory cell stack. Information is written to and read out from only the bottom entries in the stack, using standard input and output gates. This architecture is described in more mathematical detail in Appendix C. Performance of this convolutional LSTM on the same evaluations is given in the second row of Table 3. There, all three columns show optimal performance on both the in-domain and out-of-domain tasks, demonstrating the utility of the convolutional layer in helping the model to generalise robustly. Numerous authors have tackled the problem of replicating the rule-learning behaviour studied by Marcus et al. in a connectionist system, and extensive reviews of these attempts exist. Many of these approaches rely on a specific training regime to obtain the desired behaviour, rather than our approach of modifying the architecture to embed the appropriate capacities innately. However, our core intention was to demonstrate the relevance of symmetry, with convolution being a convenient and transparent means to that end. The same end could conceivably be achieved purely through learning. The SCAN task of Lake and Baroni has also stimulated a number of responses which attempt to obtain the required systematicity. One response uses a meta-learning approach, employing a training regime that explicitly permutes the correspondence between symbols and their meaning, i.e. between instructions and actions. This could be seen as learning an invariance to these permutations, rather than specifying it innately in the form of convolution. Another approach instead learns separate semantic and syntactic representations for each word, which is comparable to our approach of learning two embeddings for each word. The recurrent PDA we describe in Section 4 is very similar to a number of other architectures: prior works have proposed a neural network pushdown automaton, architectures for a number of data structures (queues, deques and stacks), and a recurrent stack structure which, in practice, is almost equivalent to our proposal. However, none of these works discuss the role of symmetry or the connection to convolution. Symmetries beyond spatial translation have been discussed by a number of authors.
Group-equivariant approaches generalise convolution to arbitrary discrete symmetries, such as reflections and rotations. The role of invariances in disentangled representations has been discussed in the literature, as has the application of probabilistic symmetries to neural network architectures. Practical examples of symmetries supporting extrapolation and generalisation beyond the training set have also been discussed. Permutation invariance is relevant to a number of representational strategies, such as bag-of-words approaches or Deep Sets. However, the relevant symmetry in these cases is usually over permutations of the order of inputs, e.g. a symmetry between wo fe fe and fe wo fe. In our case, the permutation is over the identity of the symbols, i.e. a symmetry between wo fe fe and la ti ti. One way to address the criticisms of distributed approaches raised by Fodor and Pylyshyn has been to focus on methods for binding and combining multiple representations in order to handle constituent structure more effectively. Here, we instead examined the role of symmetry in the systematicity of how those representations are processed, using a few simple proof-of-concept problems. We showed that imposing a symmetry on the architecture was effective in obtaining the desired form of generalisation when learning simple rules, composing representations and learning grammars. In particular, we discussed two forms of symmetry relevant to the processing of symbols, corresponding respectively to the fact that all atomic symbols are essentially equivalent and the fact that any given symbol can be represented in multiple places, yet retain the same meaning. The first of these gives rise to a symmetry under permutations of these symbols, which allows generalisation to occur from one symbol to another. The second gives rise to a symmetry across memory locations, which allows generalisation from simple structures to more complex ones. On all the problems, we implemented the symmetries using convolution. From a practical point of view, this allowed us to build networks using only long-accepted components from the standard neural toolkit. From a theoretical point of view, however, this implementation decision draws a connection between the cognition of space and the cognition of symbols. The translational invariance of space is probably the most significant and familiar example of symmetry we encounter in our natural environment. As such it forms a sensible foundation on which to build an understanding of other symmetries. In fact, Tarski used invariances under various spatial transformations within geometry as a starting point for a definition of logical notions in terms of invariance under all permutations. Moreover, from an evolutionary perspective, it is also plausible that there are common origins behind the mechanisms that support the exploitation of a variety of different symmetries, including potentially spatial and symbolic. In addition, recent research supports the idea that cerebral structures historically associated with the representation of spatial structure, such as the hippocampus and entorhinal cortex, also play a role in representing more general relational structures. Thus, our use of convolution is not merely a detail of implementation, but also an illustration of how spatial symmetries might relate to more abstract domains. In particular, the recurrent pushdown automaton, discussed in Section 4, utilises push and pop operations that relate fairly transparently to spatial translations.
Of course, a variety of other symmetries, beyond translations, are likely to be important in human cognition, and an important challenge for future research will be to understand how symmetries are discovered and learned empirically, rather than being innately specified. A common theme in our exploration of symmetry was the ability it conferred to separate content from structure. Imposing a symmetry across symbols or memory locations allowed us to abstract away from the particular content represented to represent the structure containing it. So, for example, the grammar rule learned by our network on the syllable sequences of Marcus et al. was able to generalise from seen to unseen syllables because it represented the abstract structure of ABB and ABA sequences, without reference to the particular syllables involved. We explored how this ability could also be exploited on composition and grammar learning tasks, but it is likely that there are many other situations where such a mechanism would be useful. Informally, a symmetry of a system is a mapping of the system onto itself which preserves the fundamental properties of the system. In the case of translation symmetry, we have input images, x, and output labels, y, and we want to learn a function, f(x), which predicts these labels and is invariant to spatial translations, T. That is, we want f to obey f(T x) = f(x). Typically, we achieve this by composing two types of function: equivariant convolutions, c, and invariant poolings, p. Equivariant here means that the output from a translated input is itself the translation of the original output: c(T x) = T c(x). While invariant means the output is unchanged by input translations: p(T x) = p(x). When the width of the convolution is reduced to one, the function, c, becomes equivariant to all permutations, S, not just translations: c(Sx) = Sc(x). Permutation equivariance arises, for example, in formal logic, where the rules of deduction depend not on the particular names within an expression, but on its logical structure. So, Socrates is mortal follows from Socrates is a man and All men are mortal not because of the meaning of Socrates or mortal, but because the syllogism has the right form. Thus, if x represents the premises and y represents the result, then the process of deduction d(x) = y should be equivariant under any substitution, S, of names. That is, we should be able to reach an equivalent result even if we rename Socrates as Bob, and so d(Sx) = Sd(x). Symmetries also arise in computational processes. For example, if x is the state of some machine containing an addressable memory and running a program that references various addresses, then we want the behaviour of the machine to be equivalent if we utilise a different set of memory addresses. That is, the state transition function, f, should be equivariant under permutations, S, of states, which apply equivalent permutations of the memory content and the addresses referenced in the program: f(Sx) = Sf(x). Our approach to the composition task is based on the encoder-decoder architecture shown in Figure 2. This is made more precise in the equations below. In the encoder, each token in the input is given an N-dimensional embedding, x, and an (M+1) × M dimensional embedding, y. The former is concatenated to the current N × M dimensional hidden state to become the (M+1)-th channel of a vector, g. A convolution of width one is applied to this vector to generate the next hidden state, using the second embedding as filter.
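The encoder update just described can plausibly be written as follows (our reconstruction from the definitions above; the symbol w_t for the t-th input word is our own notation):

```latex
% Hedged reconstruction of the encoder equations for the composition task.
\begin{align}
  g_t     &= \left[\, h_t \;;\; x_{w_t} \,\right] \in \mathbb{R}^{N \times (M+1)}, \\
  h_{t+1} &= y_{w_t} \ast g_t \in \mathbb{R}^{N \times M},
\end{align}
% where $\ast$ is a width-one convolution over the $N$ positions, i.e. a
% per-position matrix product with the $(M+1) \times M$ filter $y_{w_t}$.
```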
The decoder takes the final hidden state from the encoder and projects it down to a single channel, using a width one convolution. These are then the logits of a softmax output, o, predicting the next action. The next hidden state is also produced by a width one convolution, in this case maintaining the number of channels. An L1 regularisation is added to the cross entropy between the predictions, o, and a one-hot encoding, a, of the correct actions: Loss = H(a, o) + λ_x |x|_1 + λ_y |y|_1. For the composition task, we used N = 11 and M = 5. C CONTEXT FREE LANGUAGE LEARNING. Figure 4 gives a visual overview of the architecture we apply to learning the simple palindromic language. This is essentially a modified LSTM in which the forget gate has been replaced with convolutional filters. We define this explicitly below. Each token in the input is given an M-dimensional embedding, x, which is concatenated with the current N-dimensional hidden state, h, to give a vector, g, representing the current context. This controls N single-channel width-three filters, f, each of which is the output of a softmax: f_{n,t} = softmax(W_{f,n} g_t + b_{f,n}). As in a standard LSTM, values written to the cells are the outputs of tanh units gated by a sigmoid, and the output gate is also a sigmoid function. Recurrence between the cells in each of the N memory stacks is based on one-dimensional convolution and controlled by the filters f: c_{n,t} = f_{n,t} ∗ c_{n,t−1}. The 0-th values in each stack are updated using the values of i: c_{n,t,0} ← c_{n,t,0} + i_{t,n}. And the new hidden states are also read out from the 0-th values, gated by tanh units: h_{t+1,n} = o_{t,n} · tanh(c_{n,t,0}). The final outputs, to predict a one-hot vector representing the next symbol, apply a softmax to these new hidden states. The loss is the cross entropy over the second half of the sequence, and for this task M and N are both 10, with the memory stack having a depth of 20.
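A compact sketch of one time step of this convolutional stack-LSTM follows; the parameter names and the grouped-convolution implementation are our assumptions, not the paper's code.

```python
# One step of the stack-LSTM: width-3 softmax filters replace the forget gate.
import torch
import torch.nn.functional as F

M, N, DEPTH = 10, 10, 20  # embedding dim, number of stacks, stack depth

def init_params():
    d = M + N
    return {"W_f": torch.randn(N, 3, d) * 0.1, "b_f": torch.zeros(N, 3),
            "W_i": torch.randn(N, d) * 0.1, "W_z": torch.randn(N, d) * 0.1,
            "W_o": torch.randn(N, d) * 0.1}

def step(x_t, h, c, p):
    g = torch.cat([x_t, h])                        # current context, (M+N,)
    # One width-3 filter per stack; the softmax makes each a soft shift
    # (push / stay / pop) of the stack contents.
    f = F.softmax(p["W_f"] @ g + p["b_f"], dim=1)  # (N, 3)
    # Recurrence between cells: convolve each stack with its own filter.
    c = F.conv1d(c.unsqueeze(0), f.unsqueeze(1), padding=1, groups=N).squeeze(0)
    i = torch.sigmoid(p["W_i"] @ g) * torch.tanh(p["W_z"] @ g)
    c[:, 0] = c[:, 0] + i                          # write to the bottom of each stack
    o = torch.sigmoid(p["W_o"] @ g)
    h = o * torch.tanh(c[:, 0])                    # read from the bottom of each stack
    return h, c

p = init_params()
h, c = torch.zeros(N), torch.zeros(N, DEPTH)
h, c = step(torch.randn(M), h, c, p)  # one time step on a token embedding
```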
We use convolution to make neural networks behave more like symbolic systems.
987
scitldr
Key equatorial climate phenomena such as QBO and ENSO have never been adequately explained as deterministic processes. This is in spite of recent research showing growing evidence of predictable behavior. This study applies the fundamental Laplace tidal equations with simplifying assumptions along the equator — i.e. no Coriolis force and a small angle approximation. The solutions to the partial differential equations are highly non-linear, related to Navier-Stokes, and only search approaches can be used to fit them to the data.
Analytical Formulation of Equatorial Standing Wave Phenomena: Application to QBO and ENSO
988
scitldr
In the field of Continual Learning, the objective is to learn several tasks one after the other without access to the data from previous tasks. Several solutions have been proposed to tackle this problem, but they usually assume that the user knows which of the tasks to perform at test time on a particular sample, or rely on small samples from previous data, and most of them suffer from a substantial drop in accuracy when updated with batches of only one class at a time. In this article, we propose a new method, OvA-INN, which is able to learn one class at a time and without storing any of the previous data. To achieve this, for each class, we train a specific Invertible Neural Network to output the zero vector for its class. At test time, we can predict the class of a sample by identifying which network outputs the vector with the smallest norm. With this method, we show that we can take advantage of pretrained models by stacking an invertible network on top of a features extractor. This way, we are able to outperform state-of-the-art approaches that rely on features learning for the Continual Learning of MNIST and CIFAR-100 datasets. In our experiments, we reach 72% accuracy on CIFAR-100 after training our model one class at a time. A typical Deep Learning workflow consists in gathering data, training a model on this data and finally deploying the model in the real world. If one would need to update the model with new data, it would require merging the old and new data and running a training from scratch on this new dataset. Nevertheless, there are circumstances where this method may not apply. For example, it may not be possible to store the old data because of privacy issues (health records, sensitive data) or memory limitations (embedded systems, very large datasets). In order to address those limitations, recent works propose a variety of approaches in a setting called Continual Learning. In Continual Learning, we aim to learn the parameters w of a model on a sequence of datasets D_1, …, D_N, with inputs x_i^j ∈ X_i and labels y_i^j ∈ Y_i, to predict p(y*|w, x*) for an unseen pair (x*, y*). The training has to be done on each dataset, one after the other, without the possibility to reuse previous datasets. The performance of a Continual Learning algorithm can then be measured with two protocols: multi-head or single-head. In the multi-head scenario, the task identifier i is known at test time. For evaluating performances on task i, the set of all possible labels is then Y = Y_i. Whilst in the single-head scenario, the task identifier is unknown; in that case we have Y = ∪_{i=1}^N Y_i, with N the number of tasks learned so far. For example, let us say that the goal is to learn MNIST sequentially with two batches: using only the data from the first five classes and then only the data from the remaining five other classes. In multi-head learning, one asks at test time to be able to recognize samples of 0-4 among the classes 0-4 and samples of 5-9 among classes 5-9. On the other hand, in single-head learning, one can not assume from which batch a sample is coming, hence the need to be able to recognize any samples of 0-9 among classes 0-9. Although the former has received the most attention from researchers, the latter fits better the desiderata of a Continual Learning system as expressed in the literature. The single-head scenario is also notoriously harder than its multi-head counterpart and is the focus of the present work.
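The distinction between the two protocols can be made concrete with a few lines of code (the logits here are hypothetical):

```python
# Multi-head vs single-head evaluation on the two-batch MNIST example.
import numpy as np

logits = np.random.randn(4, 10)          # 4 samples, 10 classes total
task_labels = [list(range(0, 5)), list(range(5, 10))]

def multi_head_predict(logits, task_id):
    labels = task_labels[task_id]        # task identity is given at test time
    return [labels[j] for j in logits[:, labels].argmax(axis=1)]

def single_head_predict(logits):
    return logits.argmax(axis=1).tolist()  # must choose among all 10 classes

print(multi_head_predict(logits, task_id=1))  # predictions restricted to 5-9
print(single_head_predict(logits))            # predictions over 0-9
```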
Updating the parameters with data from a new dataset exposes the model to drastically deteriorating its performance on previous data, a phenomenon known as catastrophic forgetting. To alleviate this problem, researchers have proposed a variety of approaches such as storing a few samples from previous datasets, adding a distillation regularization, updating the parameters according to their usefulness on previous datasets, or using a generative model to produce samples from previous datasets. Despite those efforts toward a more realistic setting of Continual Learning, one can notice that, most of the time, results are proposed in the case of a sequence of batches of multiple classes. This scenario often ends up with better accuracy (because the learning procedure highly benefits from the diversity of classes to find the best tuning of parameters), but it does not illustrate the behavior of those methods in the worst case scenario. In fact, Continual Learning algorithms should be robust in the size of the batch of classes. In this work, we propose to implement a method specially designed to handle the case where each task consists of only one class. It will therefore be evaluated in the single-head scenario. Our approach, named One-versus-All Invertible Neural Networks (OvA-INN), is based on the invertible neural network architecture proposed in NICE. We use it in a One-versus-All strategy: each network is trained to make a prediction for one class, and the most confident one on a sample is used to identify the class of the sample. In contrast to most other methods, the training phase of each class can be executed independently from the others. The contributions of our work are: (i) a new approach for Continual Learning with one class per batch; (ii) a neural architecture based on Invertible Networks that does not require storing any of the previous data; (iii) state-of-the-art results on several tasks of Continual Learning for Computer Vision (CIFAR-100, MNIST) in this setting. We start by reviewing the closest methods to our approach in Section 2, then explain our method in Section 3, analyse its performances in Section 4 and identify limitations and possible extensions in Section 5. Generative models Inspired by biological mechanisms such as the hippocampal system that rapidly encodes recent experiences and the memory of the neocortex that is consolidated during sleep phases, a natural approach is to produce samples of previous data that can be added to the new data to learn a new task. FearNet relies on an architecture based on an autoencoder, whereas Deep Generative Replay and Parameter Generation and Model Adaptation propose to use a generative adversarial network. Those methods present good results but require complex models to be able to generate reliable data. Furthermore, it is difficult to assess the relevance of the generated data for conducting subsequent training iterations. Coreset-based models These approaches alleviate the constraint on the availability of data by allowing the storage of a few samples from previous data (which are called the coreset). iCaRL and End-to-end IL store 2000 samples from previous batches and rely on, respectively, a distillation loss and a mixture of cross-entropy and distillation losses to alleviate forgetting. The authors of SupportNet have also proposed a strategy to select relevant samples for the coreset. Gradient Episodic Memory ensures that gradients computed on new tasks do not interfere with the loss of previous tasks. Those approaches give the best results for single-head learning.
But, similarly to generated data, it is not clear which data may be useful for conducting further training iterations. In this paper, we are challenging the need for the coreset in single-head learning. Distance-based models These methods propose to embed the data in a space which can be used to identify the class of a sample by computing a distance between the embedding of the sample and a reference for each class. Among the most popular, we can cite Matching Networks and Prototypical Networks, but these methods have been mostly applied to few-shot learning scenarios rather than continual ones. Regularization-based approaches These approaches present an attempt to mitigate the effect of catastrophic forgetting by imposing some constraints on the loss function when training subsequent classes. Elastic Weight Consolidation, Synaptic Intelligence and Memory Aware Synapses all seek to prevent the update of weights that were the most useful to discriminate between previous classes. Hence, it is possible to constrain the learning of a new task in such a way that the most relevant weights for the previous tasks are less susceptible to be updated. Learning without forgetting proposes to use knowledge distillation to preserve previous performances. The network is divided into two parts: the shared weights and the dedicated weights for each task. When learning a new task A, the data of A get assigned "soft" labels by computing the output of the network with the dedicated weights for each previous task. Then the network is trained with the loss of task A and is also constrained to reproduce the recorded output for each other task. In another line of work, the authors propose to use an autoencoder to reconstruct the extracted features for each task. When learning a new task, the features extractor is adapted but has to make sure that the autoencoders of the other tasks are still able to reconstruct the extracted features from the current samples. While these methods obtain good results for learning one new task, they become limited when it comes to learning several new tasks, especially in the one class per batch setting. Expandable models In the case of the multi-head setting, it has been proposed to reuse the previously learned layers and complete them with new layers trained on a new task. This strategy is presented in Progressive Networks. In order to reduce the growth in memory caused by the new layers, the authors of Dynamically Expandable Networks proposed a hybrid method which retrains some of the previous weights and adds new ones when necessary. Although these approaches work very well in the case of multi-head learning, they can not be adapted to single-head and are therefore not included in benchmarks with OvA-INN. We investigate the problem of training on several datasets in a sequential fashion with batches of only one class at a time. Most state-of-the-art approaches rely on updating a features extractor when data from a new class is available. But this strategy is unreliable in the special case we are interested in, namely batches of data from only one class. With few or no samples of negative data, it is very inefficient to update the weights of a network, because the setting of deep learning normally involves vast amounts of data to be able to learn to extract valuable features. Without enough negative samples, the training is prone to overfitting the new class. Recent works have proposed to rely on generative models to overcome this lack of data by generating samples of old classes.
Nevertheless, updating a network with sampled data is not as efficient as with real data and, in the long run, the generative quality of early classes suffers from the multiple updates. Our approach consists in interpreting a Continual Learning problem as several out-of-distribution (OOD) detection problems. OOD detection has already been studied for neural networks and can be formulated as a binary classification problem which consists in predicting if an input x was sampled from the same distribution as the training data or from a different distribution. Hence, for each class, we can train a network to predict if an input x is likely to have been sampled from the distribution of this class. The class with the highest confidence can be used as the prediction of the class of x. This training procedure is particularly suitable for Continual Learning since the training of each network does not require any negative sample. Using the same protocol as NICE, for a class i, it is possible to train a neural network f_i to fit a prior distribution p and compute the exact log-likelihood l_i on a sample x: l_i(x) = log p(f_i(x)) + log |det(∂f_i(x)/∂x)| (Equation 1). To obtain the formulation of log-likelihood as expressed in Equation 1, the network f_i has to respect some constraints discussed in Section 3.3. (Figure 1: forward pass in an invertible block. x is split into x_{0:n/2} and x_{n/2:n}; f_1 and f_2 can be any type of neural networks as long as their output dimension is the same as their input dimension. In our experiments, we stack two of these blocks one after the other and use fully-connected feedforward layers for f_1 and f_2.) Keeping the same hypothesis as NICE, we consider the case where p is a distribution with independent components p_d: p(z) = ∏_d p_d(z_d). In our experiments, we considered p_d to be standard normal distributions. Although it is possible to learn the parameters of the distributions, we found experimentally that doing so decreases the results. Under these design choices, the computation of the log-likelihood becomes: l_i(x) = −(1/2) ‖f_i(x)‖²_2 + β, where β = −n log √2π is a constant term. Hence, identifying the network with the highest log-likelihood is equivalent to finding the network with the smallest output norm. The neural network architecture proposed by NICE is designed to operate a change of variables between two density functions. This assumes that the network is invertible and respects some constraints to make it efficiently computable. An invertible block (see Figure 1) consists in splitting the input x into two subvectors x_1 and x_2 of equal size; then successively applying two (not necessarily invertible) networks f_1 and f_2 following the equations y_1 = x_1 + f_1(x_2) and y_2 = x_2 + f_2(y_1); and finally concatenating y_1 and y_2. The inverse operation can be computed with x_2 = y_2 − f_2(y_1) and x_1 = y_1 − f_1(x_2). These invertible equations illustrate how Invertible Networks operate a bijection between their input and their output. We propose to specialize each Invertible Network to a specific class by training them to output a vector with small norm when presented with data samples from their class. Given a dataset X_i of class i and an Invertible Network f_i, our objective is to minimize the loss L = Σ_{x∈X_i} ‖f_i(x)‖²_2. Once the training has converged, the weights of this network won't be updated when new classes are added. At inference time, after learning t classes, the predicted class y* for a sample x is obtained by running each network and identifying the one with the smallest output: y* = argmin_{i∈{1,…,t}} ‖f_i(x)‖_2. As it is common practice in image processing, one can also use a preprocessing step by applying a fixed pretrained features extractor beforehand.
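A sketch of an invertible block and of the resulting inference rule is given below; the exact order of the two coupling operations is our assumption, as is the use of ReLU inside f_1 and f_2.

```python
# Additive coupling block and smallest-output-norm inference (OvA-INN style).
import torch
import torch.nn as nn

class InvertibleBlock(nn.Module):
    def __init__(self, n, m=16):
        super().__init__()
        half = n // 2
        # f1, f2 need not be invertible; here small bottlenecked MLPs.
        self.f1 = nn.Sequential(nn.Linear(half, m), nn.ReLU(), nn.Linear(m, half))
        self.f2 = nn.Sequential(nn.Linear(half, m), nn.ReLU(), nn.Linear(m, half))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        y1 = x1 + self.f1(x2)          # additive coupling: unit Jacobian determinant
        y2 = x2 + self.f2(y1)
        return torch.cat([y1, y2], dim=1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=1)
        x2 = y2 - self.f2(y1)
        x1 = y1 - self.f1(x2)
        return torch.cat([x1, x2], dim=1)

# One two-block network per class learned so far.
nets = [nn.Sequential(InvertibleBlock(256), InvertibleBlock(256)) for _ in range(3)]

def predict(features):
    # Equivalent to picking the highest log-likelihood under a standard normal prior.
    norms = torch.stack([f(features).norm(dim=1) for f in nets])
    return norms.argmin(dim=0)

feats = torch.randn(5, 256)  # e.g. pretrained ResNet features
print(predict(feats))        # class index per sample
```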
We compare our method against several state-of-the-art baselines for single-head learning on MNIST and CIFAR-100 datasets. Topology of OvA-INN Due to the bijective nature of Invertible Networks, their output size is the same as their input size, hence the only way to change their size is by changing the depth or by compressing the parameters of the intermediate networks f_1 and f_2. In our experiments, these networks are fully connected layers. To reduce the memory footprint, we replace the square matrix of parameters W of size n × n by a product of matrices AB of sizes n × m and m × n (with a compressing factor for the first and second block of m = 16 for MNIST and m = 32 for CIFAR-100). More details on the memory cost can be found in Appendix A. Regularization When learning one class at a time, the amount of training data can be highly reduced: only 500 training samples per class for CIFAR-100. To avoid overfitting the training set, we found that adding a weight decay regularization could increase the validation accuracy. More details on the hyperparameter choices can be found in Appendix B. Rescaling As ResNet has been trained on images of size 224×224, we rescale CIFAR-100 images to match the size of images from Imagenet. We start by considering the MNIST dataset, as it is a common benchmark that remains challenging in the case of single-head Continual Learning. Generative models: -Parameter Generation and Model Adaptation (PGMA) -Deep Generative Replay (DGR) Coreset-based models: -iCaRL -SupportNet For Parameter Generation and Model Adaptation (PGMA) and Deep Generative Replay (DGR), we report the results from the original papers; whereas we use the provided code of SupportNet to compute the results for iCaRL and SupportNet with the conventional architecture of two layers of convolutions with poolings and a fully connected last layer. We have also set the coreset size to s = 800 samples. (Table 1, flattened fragment; columns: accuracy (%), memory footprint, class-batch size. First method name truncated in the source: 81.7, 6,000k, 2 by 2; SupportNet: 89.9, 940k, 2 by 2; DGR: 95.8, 12,700k, 2 by 2; iCaRL: values truncated.) We report the average accuracy over all the classes after the networks have been trained on all batches (see Table 1). Our architecture does not use any pretrained features extractor common to all classes (contrarily to our CIFAR-100 experiment): each sample is processed through an Invertible Network, composed of two stacked invertible blocks. Our approach presents better results than all the other reference methods while having a smaller cost in memory (see Appendix A) and being trained by batches of only one class. Also, our architecture relies on simple fully-connected layers (as parts of invertible layers) whilst the other baselines implement convolutional layers. We now consider a more complex image dataset with a greater number of classes. This allows us to make comparisons in the case of a long sequence of data batches and to illustrate the value of using a pretrained features extractor for Continual Learning. Distance-based model: -Nearest prototype: our implementation of the method consisting in computing the mean vector (prototype) of the output of a pretrained ResNet32 for each class at train time. Inference is performed by finding the closest prototype to the ResNet output of a given sample. Generative model: -FearNet: uses a pretrained ResNet48 features extractor. FearNet is trained with a warm-up phase. Namely, the network is first trained with all the first 50 classes of CIFAR-100, and subsequently learns the next 50 classes one by one in a continual fashion.
Coreset-based models: -iCaRL: retrains a ResNet32 architecture on new data with a distillation loss. -End-to-end IL: retrains a ResNet32 architecture on new data with a cross-entropy loss together with a distillation loss. The data is provided by batches of classes. When the training on a batch (D_i) is completed, the accuracy of the classifier is evaluated on the test data of classes from all previous batches (D_1, ..., D_i). We report the results from the literature with various batch sizes when they are available. OvA-INN uses the weights of a ResNet32 pretrained on ImageNet and never updates them. FearNet also uses pretrained weights from a ResNet. iCaRL and End-to-End IL use this architecture but retrain it from scratch at the beginning and fine-tune it with each new batch. The performance of the Nearest prototype baseline proves that there is high benefit in using a pretrained features extractor on this kind of dataset. The results of methods that retrain the features extractor quickly deteriorate compared to those using pretrained parameters. Even with larger batches of classes, the gap is still present. It can be surprising that, at the end of its warm-up phase, FearNet still has an accuracy below OvA-INN, even though it has been trained on all the data available at this point. It should be noted that FearNet is training an autoencoder and uses its encoding part as a features extractor (stacked on the ResNet) before classifying a sample. This can diminish the discriminative power of the network since it is also constrained to reproduce its input (only a single autoencoder is used for all classes). To further understand the effect of an Invertible Network on the feature space of a sample, we propose to project the different feature spaces in 2D using t-SNE. We project the features of the five first classes of the CIFAR-100 test set (see Figure 3). Classes that are already well represented in a cluster with ResNet features (like the violet class) are clearly separated from the clusters of Invertible Networks. Classes represented with ambiguity with ResNet features (like light green and red) are better clustered in the Invertible Network space. A limiting factor in our approach is the necessity to add a new network each time one wants to learn a new class. This makes the memory and computational cost of OvA-INN linear with the number of classes. Recent works in network merging could alleviate the memory issue by sharing weights or relying on weight superposition. This being said, we showed that OvA-INN was able to achieve superior accuracy on CIFAR-100 class-by-class training compared to approaches reported in the literature, while using fewer parameters. Another constraint of using Invertible Networks is to keep the size of the output equal to the size of the input. When one wants to apply a features extractor with a high number of output channels, it can have a very negative impact on the memory consumption of the invertible layers. Feature Selection or Feature Aggregation techniques may help to alleviate this issue. Finally, we can notice that our approach is highly dependent on the quality of the pretrained features extractor. In our CIFAR-100 experiments, we had to rescale the input to make it compatible with ResNet. Nonetheless, recent research works show promising results in training features extractors in very efficient ways. Because it does not require retraining its features extractor, we can foresee better performance in class-by-class learning with OvA-INN as new and more efficient features extractors are discovered.
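For reference, the Nearest prototype baseline described above is only a few lines; the feature dimensions below are placeholders:

```python
# Nearest-prototype baseline: one mean feature vector per class.
import numpy as np

prototypes = {}  # class -> mean pretrained-feature vector, built one class at a time

def learn_class(class_id, features):          # features: (n_samples, d)
    prototypes[class_id] = features.mean(axis=0)

def classify(feature):                         # feature: (d,)
    return min(prototypes, key=lambda c: np.linalg.norm(feature - prototypes[c]))

learn_class(0, np.random.randn(500, 64))       # e.g. 500 samples per class
learn_class(1, np.random.randn(500, 64) + 1.0)
print(classify(np.random.randn(64) + 1.0))     # most likely 1
```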
As a future research direction, one could try to incorporate our method in a Reinforcement Learning scenario where various situations can be learned separately in a first phase (each situation with its own Invertible Network). Then, during a second phase where any situation can appear without the agent being explicitly told which situation it is in, the agent could rely on the previously trained Invertible Networks to improve its policy. This setting is closely related to Options in Reinforcement Learning. Also, in a regression setting, one can add a fully connected layer after an intermediate layer of an Invertible Network and use it to predict the output for the trained class. At test time, one only needs to read the output from the regression layer of the Invertible Network that had the highest confidence. In this paper, we proposed a new approach for the challenging problem of single-head Continual Learning without storing any of the previous data. On top of a fixed pretrained neural network, we trained for each class an Invertible Network to refine the extracted features and maximize the log-likelihood on samples from its class. This way, we show that we can predict the class of a sample by running each Invertible Network and identifying the one with the highest log-likelihood. This setting allows us to take full benefit of pretrained models, which results in very good performances on the class-by-class training of CIFAR-100 compared to prior works. A MEMORY COST. For MNIST, iCaRL stores a coreset of 800 images of size 28 × 28 and uses two convolutional layers of 32 and 64 channels with 5 × 5 kernels, a fully-connected layer with 100 channels applied on an input of size 7 × 7, and a final layer of 10 channels: S_iCaRL,MNIST = 28 × 28 × 800 + (5 × 5 + 1) × 32 + (5 × 5 + 1) × 64 + (7 × 7 × 64 + 1) × 100 + (100 + 1) × 10 = 944,406. Since every method relies on a ResNet32 (around 20M parameters) to compute its features (except FearNet, which uses a ResNet48), we do not count the features extractor in the memory consumption. OvA-INN uses 2 blocks with 2 layers (f_1 and f_2) for 100 classes. The weight matrix of each layer W is a product of two matrices A and B of size 256 × 32 and 32 × 256. The memory required is: S_OvA-INN,CIFAR = (256 × 32 × 2 + 256) × 2 × 2 × 100 = 6,656,000. We use the default coreset size s = 2000 of iCaRL and End-to-End IL, with each image of size 32 × 32: S_iCaRL,CIFAR = 32 × 32 × 3 × 2000 = 6,144,000. B HYPERPARAMETERS. Our implementation is done with Pytorch, using the Adam optimizer and a scheduler that reduces the learning rate by a factor of 0.5 when the loss stops improving. We use the resize transformation from torchvision with the default bilinear interpolation. C TASK-BY-TASK LEARNING. We provide additional experimental results on the multi-head learning of CIFAR-100 with 10 tasks of 10 classes each. The training procedure of OvA-INN does not change from the usual single-head learning but, at test time, the evaluation is processed by batches of 10 classes. The accuracy score is the average accuracy over all 10 tasks. We report the results from the literature. (Flattened table fragment, average accuracy (%): EWC 81.34; Progressive Networks 88.19; DEN 92. (the last value is truncated in the source).)
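The parameter counts of Appendix A can be checked with a few lines of arithmetic:

```python
# Reproducing the memory-cost computations of Appendix A.
s_icarl_mnist = (28*28*800 + (5*5+1)*32 + (5*5+1)*64
                 + (7*7*64+1)*100 + (100+1)*10)
s_ova_inn_cifar = (256*32*2 + 256) * 2 * 2 * 100  # 2 layers x 2 blocks x 100 classes
s_icarl_cifar = 32*32*3*2000                      # coreset of 2000 RGB images

print(s_icarl_mnist)    # 944406
print(s_ova_inn_cifar)  # 6656000
print(s_icarl_cifar)    # 6144000
```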
We propose to train an Invertible Neural Network for each class to perform class-by-class Continual Learning.
989
scitldr
Human-computer conversation systems have attracted much attention in Natural Language Processing. Conversation systems can be roughly divided into two categories: retrieval-based and generation-based systems. Retrieval systems search a user-issued utterance (namely a query) in a large conversational repository and return a reply that best matches the query. Generative approaches synthesize new replies. Both ways have certain advantages but suffer from their own disadvantages. We propose a novel ensemble of retrieval-based and generation-based conversation systems. The retrieved candidates, in addition to the original query, are fed to a reply generator via a neural network, so that the model is aware of more information. The generated reply together with the retrieved ones then participates in a re-ranking process to find the final reply to output. Experimental results show that such an ensemble system outperforms each single module by a large margin. Automatic human-computer conversation systems have long served humans in domain-specific scenarios. A typical approach for such systems is built by human engineering, for example, using manually constructed ontologies, natural language templates, and even predefined dialogue state tracking (BID29). Recently, researchers have paid increasing attention to open-domain, chatbot-style human-computer conversations such as XiaoIce and Duer due to their important commercial values. For open-domain conversations, rules and templates would probably fail since they can hardly handle the great diversity of conversation topics and flexible representations of natural language sentences. With the increasing popularity of on-line social media and community question-answering platforms, a huge number of human-human conversation utterances are available on the public Web (BID32; BID13). Previous studies begin to develop data-oriented approaches, which can be roughly categorized into two groups: retrieval systems and generative systems. When a user issues an utterance (called a query), the retrieval-based conversation systems search a corresponding utterance (called a reply) that best matches the query in a pre-constructed conversational repository (BID10; BID11). Owing to the abundant web resources, the retrieval mechanism will always find a candidate reply given a query using semantic matching. The retrieved replies usually have various expressions with rich information. However, the retrieved replies are limited by the capacity of the pre-constructed repository. Even the best matched reply from the conversational repository is not guaranteed to be a good response since most cases are not tailored for the issued query. To make a reply tailored appropriately for the query, a better way is to generate a new one accordingly. With the prosperity of neural networks powered by deep learning, generation-based conversation systems are developing fast. Generation-based conversation systems can synthesize a new sentence as the reply, and thus bring the benefits of good flexibility and quality. A typical generation-based conversation model is seq2seq (BID23; BID22; BID20), in which two recurrent neural networks (RNNs) are used as the encoder and the decoder. The encoder is to capture the semantics of the query with one or a few distributed and real-valued vectors (also known as embeddings); the decoder aims at decoding the query embeddings to a reply.
Long short term memory (LSTM) units (BID8) and gated recurrent units (GRUs) (BID3) could further enhance the RNNs to model longer sentences. (Table 1: characteristics of retrieved and generated replies in two different conversational systems.) The advantage of generation-based conversation systems is that they can produce flexible and tailored replies. A well-known problem for generation-based conversation systems built on seq2seq is that they are prone to choosing universal and common generations. These generated replies, such as "I don't know" and "Me too", suit many queries (BID20), but they contain insufficient semantics and information. Such insufficiency leads to non-informative conversations in real applications. Previously, the retrieval-based and generation-based systems, with their own characteristics as listed in Table 1, have been developed separately. We are seeking to absorb their merits. Hence, we propose an ensemble of retrieval-based and generation-based conversation systems. Specifically, given a query, we first apply the retrieval module to search for k candidate replies. We then propose a "multi sequence to sequence" (multi-seq2seq) model to integrate each retrieved reply into the seq2seq generation process so as to enrich the meaning of generated replies to respond to the query. We generate a reply via the multi-seq2seq generator based on the query and the k retrieved replies. Afterwards, we construct a re-ranker to re-evaluate the retrieved replies and the newly generated reply so that more meaningful replies with abundant information would stand out. The highest ranked candidate (either retrieved or generated) is returned to the user as the final reply. To the best of our knowledge, we are the first to build a bridge over retrieval-based and generation-based modules to work out a solution for an ensemble conversation system. Experimental results show that our ensemble system consistently outperforms each single component in terms of subjective and objective metrics, and both retrieval-based and generation-based methods contribute to the overall approach. This also confirms the rationale for building model ensembles for conversation systems. In early years, researchers mainly focused on domain-specific conversation systems, e.g., train routing (BID0) and human tutoring (BID6). Typically, a pre-constructed ontology defines a finite set of slots and values, for example, cuisine, location, and price range in a food service conversation system; during human-computer interaction, a state tracker fills plausible values to each slot from the user input, and recommends the restaurant that best meets the user's requirement (BID30; BID17). In the open domain, however, such slot-filling approaches would probably fail because of the diversity of topics and natural language utterances. BID10 apply information retrieval techniques to search for related queries and replies. BID11 and BID32 use both shallow hand-crafted features and deep neural networks for matching. BID13 propose a random walk-style algorithm to rank candidate replies. In addition, their model can incorporate additional content (related entities in the conversation context) by searching a knowledge base when a stalemate occurs during human-computer conversations. Generative conversation systems have recently attracted increasing attention in the NLP community. BID18 formulate query-reply transformation as phrase-based machine translation. BID36 use two RNNs as encoders and one RNN as the decoder to translate a sentence given in two different languages into another language.
In the last years, the renewed prosperity of neural networks has witnessed an emerging trend in using RNNs for conversation systems (BID25; BID26; BID23; BID22; BID20). The prevalent structure is the seq2seq model (BID25), which comprises one encoder and one decoder. However, a known issue with RNNs is that they prefer to generate short and meaningless utterances. Following the RNN-based approach, BID12 propose a mutual information objective in contrast to the conventional maximum likelihood criterion. BID16 and BID31 propose to introduce additional content into sequence generation. (Figure 1: The overall architecture of our model ensemble. We combine retrieval-based and generation-based conversation systems with two mechanisms. The first ensemble is to enhance the generator with the retrieved candidates. The second is the re-ranking of both candidates.) BID7 uses a knowledge base for answer generation in a question answering task and BID14 investigates different attention strategies in multi-source generation. To the best of our knowledge, the two main streams, namely retrieval-based and generation-based systems, have developed independently, and we are the first to combine these two together. In the following section, we depict the whole picture of our ensemble framework, and describe how to integrate those two modules in detail. 3 MODEL ENSEMBLE 3.1 OVERVIEW Figure 1 depicts the overview of our proposed conversation system that ensembles the retrieval-based and generation-based approaches. It consists of the following components. We briefly describe each component, then present the details in the following sub-sections. • Retrieval Module. We have a pre-constructed repository consisting of millions of query-reply pairs q*, r*, collected from human conversations. When a user sends a query utterance q, our approach utilizes a state-of-the-practice information retrieval system to search for the k best matched queries (q*), and returns their associated replies r* as k candidates. • Generation Module. We propose the multi-seq2seq model, which takes the original query q and the k retrieved candidate replies r*_1, r*_2, ..., r*_k as input, and generates a new reply r+. Thus the generation process could not only consider the given query, but also take advantage of the useful information from the retrieved replies. We call it the first ensemble in our framework. • Re-ranker. Finally, we develop a re-ranker to select the best reply r from the k + 1 candidates obtained from the retrieval-based and generation-based modules. Through the ensemble of retrieval-based and generation-based conversation, the enlarged candidate set enhances the quality of the final results. We call this procedure the second ensemble in our framework. The information retrieval-based conversation is based on the assumption that the appropriate reply to the user's query is contained in the pre-constructed conversation datasets. We collect huge amounts of conversational corpora from on-line chatting platforms, whose details will be described in the section on evaluation. Each utterance and its corresponding reply form a pair, denoted as q*, r*. Based on the pre-constructed dataset, the retrieval process can be performed using a state-of-the-practice information retrieval system. We use a Lucene-powered system for the retrieval implementation. We construct the inverted indexes for all the conversational pairs at the off-line stage. When a query is issued, a keyword search with the tf.idf weighting schema will be performed to retrieve several q* that match the user's query q.
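A minimal sketch of this retrieval step is shown below; the paper uses Lucene, so standing in scikit-learn's TfidfVectorizer and toy query-reply pairs is purely illustrative:

```python
# tf.idf keyword retrieval over a pre-constructed query-reply repository.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

pairs = [("how is the weather today", "it is sunny and warm"),
         ("what do you like to eat", "i love noodles"),
         ("the weather is so cold", "put on a coat")]

vectorizer = TfidfVectorizer()
index = vectorizer.fit_transform(q for q, _ in pairs)  # built off-line

def retrieve(query, k=2):
    scores = cosine_similarity(vectorizer.transform([query]), index)[0]
    top = scores.argsort()[::-1][:k]
    return [pairs[i][1] for i in top]  # replies of the k best-matched queries

print(retrieve("how cold is the weather", k=2))
```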
Given the retrieved q*, the associated r* are returned as the output, providing an indirect match between the user's query q and the retrieved reply r*. The retrieval system provides more than one reply and scores them according to their semantic matching degree, a traditional technique in information retrieval. As the top-ranked reply may not perfectly match the query, we keep the top-k replies for further processing. Information retrieval is a relatively mature technique, so the retrieval framework can be replaced by any system built according to the same principles.

Figure 2: The multi-seq2seq model, which takes a query q and k retrieved candidate replies r* as the input and generates a new reply r+ as the output.

Neural networks have become a popular approach to building end-to-end trainable conversation systems BID16. A generation-based conversation system is able to synthesize new utterances, which is complementary to retrieval-based methods. The seq2seq model BID25, which uses recurrent neural networks (RNNs) as the encoder and decoder to transform a source sentence into a target sentence, has long been used for generation tasks. The objective function for the seq2seq model in our scenario is the log-likelihood of the generated reply r+ given the query q:

$\log p(r^{+} \mid q)$

Since the reply is generated according to the conditional probability given the query, universal replies, which have relatively high probability, achieve high rankings. However, these universal sentences contain little information, which impairs the performance of generative systems. BID16 also observe that in open-domain conversation systems, if the query does not carry sufficient information, seq2seq tends to generate short and meaningless sentences. Different from the pipeline of the seq2seq model, we propose the multi-seq2seq model (Figure 2), which synthesizes a tailored reply r+ by using the information from both the query q and the retrieved replies r*_1, r*_2, ..., r*_k. multi-seq2seq employs k + 1 encoders, one for the query and the other k for the retrieved r*. The decoder receives the outputs of all encoders, and otherwise remains the same as the traditional seq2seq decoder for sentence generation. multi-seq2seq improves the quality of the generated reply in two ways. First, the newly generated reply is conditioned not only on the given query but also on the retrieved replies, so the probability of universal replies decreases since we add an additional condition. The objective function can be written as:

$\log p(r^{+} \mid q, r^{*}_{1}, r^{*}_{2}, \ldots, r^{*}_{k})$

Thus r+ achieves a high score only if it is consistent with both q and r*_1, r*_2, ..., r*_k. Second, the retrieved replies r*_1, r*_2, ..., r*_k are human-produced utterances and probably contain more information, which can serve as additional material for the generated reply r+. Hence, the generated reply can be fluent and tailored to the query, and more meaningful thanks to the information from the retrieved candidates. To take advantage of the retrieved replies, we propose to integrate attention and copy mechanisms into the decoding process. Attention helps the decoder decide which parts of each retrieved reply are useful for the current generation step. The copy mechanism directly extracts proper words from the encoders, namely both the query and the retrieved replies, and utilizes them as output words during decoding.

• Two-level Attention. multi-seq2seq conducts sentence- and character-level attention to make better use of the query and the retrieved replies.
As the retrieved replies are of uneven quality, we use sentence-level attention to assign different importance to each retrieved reply. Similarly, since the words within a sentence are also of uneven quality, we use character-level attention to assign different importance to each word in the retrieved replies. Specifically, at the sentence level, we use the k + 1 vectors obtained from the encoders to capture the information of q and the k retrieved r*; at the character level, we extend the standard attention to multi-source attention so as to introduce the retrieved replies, given by

$c_i = \sum_{j=1}^{l} \alpha_{i,j} h_j + \sum_{m=1}^{k} \sum_{j=1}^{l_m} \alpha_{i,m,j} h_{m,j}, \qquad \alpha_{i,m,j} = \frac{\exp(e_{i,m,j})}{\sum_{j'} \exp(e_{i,j'}) + \sum_{m',j'} \exp(e_{i,m',j'})}, \qquad e_{i,m,j} = s_i^{\top} M_a h_{m,j}$

where $c_i$ is the context vector at each time step of decoding, which integrates the query and all possible words in the k retrieved replies; $l$ is the length of the query, $h_j$ is a hidden state of the query encoder, $l_m$ is the length of $r^*_m$, $h_{m,j}$ is a hidden state of $r^*_m$, $s_i$ is the hidden state of the decoder at time step $i$, $\alpha_{i,m,j}$ are the normalized attention weights for each word, $e_{i,m,j}$ is calculated by a bilinear matching function, and $M_a$ is its parameter matrix.

• Copy Mechanism. multi-seq2seq also uses a copy mechanism to explicitly extract words from the retrieved replies. For each word $y_t$ in the vocabulary V, the probability $p(y_t|s_t)$ in the decoding process is composed of k + 1 parts. The first part $p_{\mathrm{ori}}$ follows the original probability calculated by the GRU/LSTM cells, and the remaining parts $p_{r^*_m}$ reflect the matching degree between the current state vector $s_t$ and the states corresponding to $y_t$ in the encoders, given by

$p(y_t \mid s_t) = p_{\mathrm{ori}}(y_t \mid s_t) + \sum_{m=1}^{k} p_{r^*_m}(y_t \mid s_t), \qquad p_{r^*_m}(y_t \mid s_t) = \delta\big(s_t^{\top} M_c h_{y_t,m}\big)$

where $h_{y_t,m}$ is the hidden state of the retrieved reply $r^*_m$ at the position corresponding to $y_t$, $\delta(\cdot)$ is the sigmoid function, and $M_c$ is the parameter matrix for matching $s_t$ and $h_{y_t,m}$. If $y_t$ does not appear in a retrieved reply $r^*_m$, the corresponding probability $p_{r^*_m}$ is zero. Both the attention and the copy mechanism aim to enrich the generated reply r+ with useful and informative words extracted from the retrieved replies r*_1, r*_2, ..., r*_k. Figure 2 displays the design of the multi-seq2seq model: the generated reply corresponds to the query while absorbing keywords from the retrieved replies.
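To make the two mechanisms concrete, here is a schematic PyTorch sketch of one decoding step. All shapes, the toy encoder states, and the bilinear parameters M_a and M_c are illustrative assumptions, not the paper's actual implementation.

```python
# Schematic sketch of a multi-seq2seq decoding step: the multi-source
# attention context c_i over the query and k retrieved replies, and the
# copy term p_{r*_m}(y_t) = sigmoid(s_t^T M_c h_{y_t,m}).
import torch

d = 8                                           # hidden size (toy)
H_q = torch.randn(5, d)                         # query states h_j (l = 5)
H_r = [torch.randn(4, d) for _ in range(2)]     # k = 2 retrieved replies
s_i = torch.randn(d)                            # decoder state at step i
M_a = torch.randn(d, d)                         # bilinear attention parameter
M_c = torch.randn(d, d)                         # bilinear copy parameter

def context(s, sources):
    # e = s^T M_a h for every source position; alpha = softmax over all of them
    scores = torch.cat([h @ (M_a @ s) for h in sources])
    alpha = torch.softmax(scores, dim=0)
    states = torch.cat(sources, dim=0)
    return (alpha.unsqueeze(1) * states).sum(dim=0)    # context vector c_i

c_i = context(s_i, [H_q] + H_r)

def copy_prob(s, m, j):
    # copy score for a word y_t occurring at position j of retrieved reply m;
    # it is added to the ordinary softmax probability p_ori of y_t
    return torch.sigmoid(s @ (M_c @ H_r[m][j]))

p_copy = copy_prob(s_i, m=0, j=1)
```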
We now have the k retrieved candidate replies r* as well as a generated one, r+. As all the retrieved candidates are obtained via indirect matching, these replies need further direct matching with the user-issued query. On the other hand, the generated replies may contain non-fluent or meaningless utterances. Hence, we propose the second ensemble, which derives the final ranking list by feeding all the candidates into a re-ranker. We deploy a Gradient Boosting Decision Tree (GBDT) BID34 classifier, since it is believed to be able to handle replies with various traits. The GBDT classifier utilizes several high-level features, listed in the following. The first four are pairwise features, and the last two are based on properties of the replies alone.

• Term similarity. The word overlap ratio captures the literal similarity between the query and the reply. For both query and reply, we transform them into binary word vectors, in which each element indicates whether a word appears in the corresponding sentence. We then apply the cosine function to calculate the term-overlap similarity of the query and the reply.

• Entity similarity. Named entities in utterances are a special form of terms. We distinguish persons, locations, and organizations in plain text with the help of named entity recognition techniques. We then build vectors of the recognized entities for both the query and its reply, and calculate the cosine similarity between the two entity-based vector representations.

• Topic similarity. "Topics" have long been regarded as an abstractive semantic representation (Hofmann). We apply Latent Dirichlet Allocation BID2 to discover the latent topics of the query and the reply. The inferred topic representation is the vector of probabilities of the piece of text belonging to each latent topic. Setting the topic number to 1000, which works efficiently in practice, we use the cosine similarity between the latent topic vectors as the topical score.

• Statistical Machine Translation. By treating queries and replies as different languages in the paradigm of machine translation, we train a translation model to "translate" the query into a reply based on the training corpora, obtaining translating word pairs (one word from a query and one word from its corresponding reply) with scores indicating their translation probabilities. To get the translation score for a query and a reply, we sum the translation scores of the word pairs extracted from the two sentences, and normalize the final score.

• Length. Since overly short replies are not preferred in practical conversation systems, we take the length of a reply as a point-wise feature, normalized into a bounded range.

• Fluency. Fluency examines whether neighboring terms have large co-occurrence likelihood. We calculate the co-occurrence probability of the bi-grams of a candidate reply and take the average value as its fluency.

The confidence scores produced by the GBDT classifier are used to re-rank all the replies. The re-ranking mechanism can eliminate both the meaningless short replies occasionally generated by multi-seq2seq and the less appropriate replies selected by the retrieval system. The re-ranker thus further ensures the effectiveness of the model ensemble. Since our framework consists of learnable but independent components (i.e., multi-seq2seq and the re-ranker), model training is conducted for each component separately. In multi-seq2seq, we use human-human utterance pairs q, r as data samples; the k retrieved candidates r* are also provided as input when we train the neural network. The standard cross-entropy loss over all words in the reply is applied as the training objective, given by

$J = -\sum_{i=1}^{T} \sum_{j=1}^{V} t^{(i)}_{j} \log y^{(i)}_{j}$

where J is the training objective, T is the length of r, $t^{(i)}$ is the one-hot vector of the next target word in the reply, serving as the ground truth, $y^{(i)}_j$ is the probability of word j obtained from the softmax function, and V is the vocabulary size. In the re-ranker, the training samples are either genuine q, r pairs or pairs generated by negative sampling.
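A minimal sketch of a few of the re-ranker features and the GBDT classifier follows, using scikit-learn. The feature subset, the toy data, and the bigram statistics are placeholder assumptions; the full system uses all six features described above.

```python
# Sketch of the re-ranker: term similarity, length, and fluency features
# fed to a gradient-boosting classifier trained on positive pairs vs.
# negative samples.
import math
from sklearn.ensemble import GradientBoostingClassifier

def term_similarity(query, reply):
    """Cosine similarity of binary word vectors of query and reply."""
    q, r = set(query.split()), set(reply.split())
    if not q or not r:
        return 0.0
    return len(q & r) / math.sqrt(len(q) * len(r))

def length_feature(reply, max_len=30):
    return min(len(reply.split()), max_len) / max_len     # normalized length

def fluency(reply, bigram_prob):
    """Average co-occurrence probability of neighboring terms."""
    words = reply.split()
    probs = [bigram_prob.get((a, b), 1e-6) for a, b in zip(words, words[1:])]
    return sum(probs) / max(len(probs), 1)

def features(query, reply, bigram_prob):
    return [term_similarity(query, reply), length_feature(reply),
            fluency(reply, bigram_prob)]

bigrams = {("it", "is"): 0.1, ("is", "sunny"): 0.05}      # toy statistics
X = [features("how is the weather", "it is sunny", bigrams),
     features("how is the weather", "it is sunny here", bigrams),
     features("how is the weather", "i like cats", bigrams),
     features("how is the weather", "me too", bigrams)]
y = [1, 1, 0, 0]                                          # pos pairs / negatives
reranker = GradientBoostingClassifier().fit(X, y)
scores = reranker.predict_proba(X)[:, 1]                  # used for re-ranking
```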
We evaluate our ensemble model on our established conversation system in Chinese. Both the retrieval-based and generation-based components require a large database of query-reply pairs, whose statistics are exhibited in TAB4. To construct a database for information retrieval, we collected human-human utterances from massive online forums, microblogs, and question-answering communities, including Sina Weibo and Baidu Tieba. In total, the database contains 7 million query-reply pairs for retrieval. For each query, corresponding to a question, we retrieve k replies (k = 2) for the generation part and the re-ranker. For the generation part, we use a dataset comprising 1,606,741 query-reply pairs originating from Baidu Tieba. Please note that q and r* are the input of multi-seq2seq, whose output is supposed to approximate the ground truth. We randomly selected 1.5 million pairs for training and 100K pairs for validation. The remaining 6,741 pairs are used for testing, both for the generation part and for the whole system. Notice that this corpus is different from the corpus used in the retrieval part, so that the ground truth of the test data is excluded from the retrieval module. The training-validation-testing split remains the same for all competing models. To train our neural models, we implement code based on the dl4mt-tutorial and follow BID22 for hyper-parameter settings, as they generally work well in our model. We did not tune the hyperparameters, but are willing to explore their roles in conversation generation in the future. All embeddings are set to 620 dimensions and the hidden states to 1000 dimensions. We apply AdaDelta with a mini-batch size of 80. Chinese word segmentation is performed on all utterances. We keep the same set of 100K words for all encoders and 30K words for the decoder due to efficiency concerns. The validation set is used only for early stopping based on the perplexity measure. We compare our model ensemble with each individual component and provide a thorough ablation test. Listed below are the competing methods in our experiments. For each method, we keep one best reply as the final result to be assessed. All competing methods are trained in the same way as our full model, when applicable, so that the comparison is fair.

• Retrieval-1, Retrieval-2. The top- and second-ranked replies for the user-issued query from a state-of-the-practice conversation system, which is a component of our model ensemble; it is also a strong baseline (proved in our experiments).

• seq2seq. An encoder-decoder framework BID25, first introduced as the neural responding machine by BID22.

• multi-seq2seq−. The generation component, applying only the two-level attention strategies.

• multi-seq2seq. The generation component, applying both two-level attention and the copy strategy.

• Ensemble(Retrieval-1, Retrieval-2, seq2seq). Ensemble with retrieval and seq2seq.

• Ensemble(Retrieval-1, Retrieval-2, multi-seq2seq). Ensemble with retrieval and multi-seq2seq. This is the full proposed model ensemble.

We evaluate our approach in terms of both subjective and objective metrics.

• Subjective metric. Human evaluation, albeit time- and labor-consuming, conforms to the ultimate goal of open-domain conversation systems. We ask three educated volunteers to annotate the results, following the protocols in BID22, BID13, and BID16. Annotators are asked to label a query-reply pair with "0" (bad), "1" (borderline), or "2" (good). The subjective evaluation is performed in a strictly random and blind fashion to rule out human bias.

• Objective metric. We adopt BLEU-1 to BLEU-4 for automatic evaluation. While it has been strongly argued that no existing automatic metric is appropriate for open-domain dialogs, a slight positive correlation between BLEU-2 and human evaluation has been shown in the non-technical Twitter domain, which is similar to our scenario. We nonetheless include BLEU scores as an expedient objective evaluation, serving as supporting evidence. BLEU scores are also used in BID12 for model comparison and in BID16 for model selection. Notice that the automatic metrics were computed on the entire test set, whereas the subjective evaluation was based on 100 randomly chosen test samples due to the limitation of human resources.
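For reference, BLEU-1 to BLEU-4 in this style can be computed with NLTK; the sentences and the smoothing choice below are illustrative assumptions, not the paper's evaluation script.

```python
# BLEU-1..4 for one candidate against one reference, via NLTK.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = ["it", "is", "sunny", "here", "today"]
candidate = ["it", "is", "sunny", "today"]
smooth = SmoothingFunction().method1

for n in range(1, 5):
    weights = tuple([1.0 / n] * n)          # uniform weights up to n-grams
    score = sentence_bleu([reference], candidate, weights=weights,
                          smoothing_function=smooth)
    print(f"BLEU-{n}: {score:.4f}")
```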
We present our main results in TAB6. Table 4 presents two examples of our ensemble and its "base" models. As shown, the retrieval system, on which our model ensemble is based, achieves better performance than RNN-based sequence generation. This also verifies that the retrieval-based conversation system in our experiments is a strong baseline to compare with.

Table 4: Examples of retrieved and generated replies. "√" indicates the reply selected by the re-ranker.

Combining the retrieval system, the generative system multi-seq2seq, and the re-ranker, our model achieves the best performance in terms of both human evaluation and BLEU scores. Concretely, our model ensemble outperforms the state-of-the-practice retrieval system by +34.45% in averaged human scores, which we believe is a large margin. Having verified that our model achieves the best performance, we are further curious how each component contributes to the final system. Specifically, we focus on the following research questions.

RQ1: What is the performance of multi-seq2seq (the first ensemble in Figure 1) in comparison with traditional seq2seq?

From the BLEU scores in TAB6, we see that both multi-seq2seq− and multi-seq2seq significantly outperform conventional seq2seq, and multi-seq2seq is slightly better than multi-seq2seq−. These results imply the effectiveness of both the two-level attention and the copy mechanism. We can also see that multi-seq2seq outperforms the second retrieval result in BLEU. In the retrieval-and-seq2seq ensemble, 72.84% retrieved and 27.16% generated replies are selected. In the retrieval-and-multi-seq2seq ensemble, the proportions become 60.72% vs. 39.28%. The trend indicates that multi-seq2seq is better than seq2seq from the re-ranker's point of view.

RQ2: How do the retrieval- and generation-based systems contribute to the re-ranking (the second ensemble in Figure 1)?

As the retrieval and generation modules account for 60.72% and 39.28% of the final results of the retrieval-and-multi-seq2seq ensemble, they contribute almost equally to the whole framework. More importantly, we notice that Retrieval-1 takes the largest proportion in both ensemble systems, which may explain why most online chatting platforms choose retrieval methods to build their systems. multi-seq2seq decreases the proportion of retrieved replies in the second ensemble system, which indicates that multi-seq2seq achieves better results than Retrieval-1 in some cases.

RQ3: Since the two ensembles are already demonstrated to be useful, can we obtain further gains by combining them?

We would like to verify whether the combination of the multi-seq2seq and re-ranking mechanisms yields further gains in our ensemble. To test this, we compare the full model Ensemble(Retrieval, multi-seq2seq) with an ensemble that uses traditional seq2seq, i.e., Ensemble(Retrieval, seq2seq). As indicated in TAB6, even with the re-ranking mechanism, the ensemble with the underlying multi-seq2seq still outperforms the one with seq2seq. Likewise, Ensemble(Retrieval, multi-seq2seq) outperforms both Retrieval and multi-seq2seq in terms of most metrics. Through the above ablation tests, we conclude that both components (the first and the second ensemble) play a role in our ensemble combining the retrieval- and generation-based systems. In this paper, we propose a novel ensemble of retrieval-based and generation-based open-domain conversation systems. The retrieval part searches for the k best-matched candidate replies, which are, along with the original query, fed to an RNN-based multi-seq2seq reply generator. Then the generated replies and retrieved ones are re-evaluated by a re-ranker to find the final result.
Although traditional generation-based and retrieval-based conversation systems have been developed in isolation, we have designed a novel mechanism to connect the two modules. The proposed ensemble model clearly outperforms state-of-the-art conversation systems on the constructed large-scale conversation dataset.
A novel ensemble of retrieval-based and generation-based methods for open-domain conversation systems.
990
scitldr
Deep neural networks trained on a wide range of datasets demonstrate impressive transferability. Deep features appear general in that they are applicable to many datasets and tasks. This property is in prevalent use in real-world applications. A neural network pretrained on large datasets, such as ImageNet, can significantly boost generalization and accelerate training if fine-tuned on a smaller target dataset. Despite its pervasiveness, little effort has been devoted to uncovering the reason for the transferability of deep feature representations. This paper seeks to understand transferability from the perspectives of improved generalization, improved optimization, and the feasibility of transfer. We demonstrate that 1) transferred models tend to find flatter minima, since their weight matrices stay close to the original flat region of the pretrained parameters when transferred to a similar target dataset; 2) transferred representations make the loss landscape more favorable, with improved Lipschitzness, which accelerates and stabilizes training substantially; the improvement is largely attributable to the fact that the principal components of the gradient are suppressed in the pretrained parameters, thus stabilizing the magnitude of the gradient in back-propagation; and 3) the feasibility of transfer is related to the similarity of both inputs and labels. A surprising discovery is that feasibility is also impacted by the training stage, in that transferability first increases during training and then declines. We further provide a theoretical analysis to verify our observations.

The last decade has witnessed the enormous success of deep neural networks in a wide range of applications. Deep learning has made unprecedented advances in many research fields, including computer vision, natural language processing, and robotics. Such great achievements are largely attributable to several desirable properties of deep neural networks. One of the most prominent properties is the transferability of deep feature representations. Transferability is basically the desirable phenomenon that deep feature representations learned from one dataset can benefit optimization and generalization on different datasets or even different tasks, e.g., from real images to synthesized images, and from image recognition to object detection. This is essentially different from traditional learning techniques and is often regarded as one of the parallels between deep neural networks and human learning mechanisms. In real-world applications, practitioners harness transferability to overcome various difficulties. Deep networks pretrained on large datasets are in prevalent use as general-purpose feature extractors for downstream tasks. For small datasets, a standard practice is to fine-tune a model transferred from a large-scale dataset such as ImageNet to avoid over-fitting. For complicated tasks such as object detection, semantic segmentation, and landmark localization, ImageNet pretrained networks accelerate the training process substantially. In the NLP field, advances in unsupervised pretrained representations have enabled remarkable improvements in downstream tasks. Despite its practical success, little effort has been devoted to uncovering the underlying mechanism of transferability. Intuitively, deep neural networks are capable of preserving the knowledge learned on one dataset after training on another similar dataset. This is even true for notably different datasets or apparently different tasks.
Another line of work has observed several detailed phenomena in the transfer learning of deep networks, yet it remains unclear why and how the transferred representations are beneficial to the generalization and optimization perspectives of deep networks. The present study addresses this important problem from several new perspectives. We first probe into how pretrained knowledge benefits generalization. Results indicate that models fine-tuned on target datasets similar to the pretrained dataset tend to stay close to the transferred parameters. In this sense, transferring from a similar dataset makes the fine-tuned parameters stay in the flat region around the pretrained parameters, leading to flatter minima than training from scratch. Another key to transferability is that transferred features significantly improve the optimization landscape, with better Lipschitzness that eases optimization. Results show that the landscapes with transferred features are smoother and more predictable, fundamentally stabilizing and accelerating training, especially at the early stages. This is further enhanced by the proper scaling of the gradient in back-propagation: the principal components of the gradient are suppressed in the transferred weight matrices, controlling the magnitude of the gradient and smoothing the loss landscapes. We also investigate a common concern raised by practitioners: when is transfer learning helpful to target tasks? We test the transferability of pretrained networks with varying inputs and labels. Instead of the similarity between pretrained and target inputs, what really matters is the similarity between the pretrained and target tasks, i.e., both inputs and labels are required to be sufficiently similar. We also investigate the relationship between the pretraining epoch and transferability. Surprisingly, although accuracy on the pretrained dataset increases throughout training, transferability first increases at the beginning and then decreases significantly as pretraining proceeds. Finally, this paper gives a theoretical analysis based on two-layer fully connected networks. The theoretical results consistently justify our empirical discoveries. The analysis here also casts light on deeper networks. We believe the mechanism of transferability is a fundamental property of deep neural networks, and the in-depth understanding presented here may stimulate further algorithmic advances. There exists extensive literature on transferring pretrained representations to learn an accurate model on a target dataset. Early work employed a brand-new label predictor to classify features extracted by a pretrained AlexNet feature extractor at different layers. Deep features were shown to benefit object detection tasks despite having been trained for image classification. A selective joint fine-tuning scheme was introduced for improving the performance of deep learning tasks under the scenario of insufficient training data. The enormous success of the transferability of deep networks in applications has stimulated empirical studies on fine-tuning and transferability. It has been observed that the transferability of deep feature representations decreases as the discrepancy between the pretrained task and the target task increases, and that it deteriorates in higher layers. Another phenomenon, catastrophic forgetting, describes the loss of pretrained knowledge when fitting to distant tasks. Other studies delved into the influence of ImageNet pretrained features by pretraining on various subsets of the ImageNet dataset.
Further studies demonstrated that deep models with better ImageNet pretraining performance can transfer better to target tasks. As for the techniques used in our analysis, Li et al. (2018a) studied the impact of the scaling of weight matrices on the visualization of loss landscapes, and prior work proposed measuring the variation of the loss to demonstrate the stability of the loss function. Recent work provided a powerful framework for analyzing two-layer over-parametrized neural networks, with elegant results and no strong assumptions on input distributions, which is flexible enough for our extension to transfer learning.

A basic observation about transferability is that tasks on target datasets more similar to the pretrained dataset attain better performance. We delve deeper into this phenomenon by experimenting on a variety of target datasets (Figure 1), under two common settings: 1) train only the last layer, fixing the pretrained network as the feature extractor, and 2) train the whole network, fine-tuning from the pretrained representations. Results in Table 1 clearly demonstrate that, for both settings and for all target datasets, the training error converges to nearly zero while the generalization error varies significantly. In particular, a network pretrained on a more similar dataset tends to generalize better and converge faster on the target dataset. A natural implication is that the knowledge learned from the pretrained networks is preserved to different extents for different target datasets. We substantiate this implication with the following experiments. To analyze to what extent the knowledge learned from the pretrained dataset is preserved, for the fixing setting we compute the Frobenius norm of the deviation between the fine-tuned weight $W$ and the pretrained weight $W_0$ as $\frac{1}{\sqrt{n}}\|W - W_0\|_F$, where $n$ denotes the number of target examples (for the fine-tuning setting, we compute the sum of the deviations over all layers). Results are shown in Figure 2.

Figure 2: The deviation of the weight parameters from the pretrained ones in the transfer process to different target datasets.

It is surprising that, although accuracy may oscillate during training, more pretrained knowledge is preserved on target datasets more similar to ImageNet, yielding a smaller $\frac{1}{\sqrt{n}}\|W - W_0\|_F$. Why is preserving pretrained knowledge related to better generalization? From the experiments above, we observe that models preserving more transferred knowledge (i.e., yielding a smaller $\frac{1}{\sqrt{n}}\|W - W_0\|_F$) generalize better on the target task. It is reasonable to hypothesize that $\frac{1}{\sqrt{n}}\|W - W_0\|_F$ is implicitly bounded in the transfer process, and that the bound is related to the similarity between the pretrained and target datasets (we will formally study this conjecture in the theoretical analysis).
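The deviation metric is straightforward to compute. Below is a small sketch with torchvision models; the "fine-tuned" model here is just a second pretrained copy standing in for a real fine-tuned checkpoint, and the target-set size n is a placeholder.

```python
# Sketch: (1/sqrt(n)) * ||W - W_0||_F between a fine-tuned model and its
# pretrained initialization, aggregated over all layers.
import math
import torch
import torchvision.models as models

pretrained = models.resnet50(pretrained=True).state_dict()
finetuned = models.resnet50(pretrained=True)   # stand-in for a fine-tuned model

def deviation(model, init_state, n_target):
    total = 0.0
    for name, p in model.state_dict().items():
        if name in init_state and p.dtype.is_floating_point:
            total += torch.norm(p.float() - init_state[name].float()).item() ** 2
    return math.sqrt(total) / math.sqrt(n_target)

print(deviation(finetuned, pretrained, n_target=5994))  # e.g. CUB-200 train size
```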
Intuitively, a neural network attempts to fit the training data by twisting itself from the initialization point. For similar datasets the twist is mild, with the weight parameters staying closer to the pretrained parameters. This property of staying near the pretrained weights is crucial for understanding the improvement in generalization. Since optimizing deep networks inevitably runs into local minima, a common belief about deep networks is that the optimization trajectories of the weight parameters on different datasets will be essentially different, leading to distant local minima. To test whether this is true, we compare the weight matrices obtained by training from scratch and by using ImageNet pretrained representations in Figure 4. The results are quite counterintuitive: the local minima reached on different datasets with ImageNet pretraining are close to each other, all concentrating around the ImageNet pretrained weights, whereas the local minima of training from scratch and of ImageNet pretraining are far apart, even on the same dataset.

Figure 4: Surprisingly, weight matrices on the same dataset may be distant at convergence when using different initializations. On the contrary, even for discrepant datasets, the weight matrices stay close to the initialization when using the same pretrained parameters.

This provides us with a clear picture of how transferred representations improve generalization on target datasets. Rich studies have indicated that the properties of local minima are directly related to generalization. Using pretrained representations restricts the weight matrices to stay near the pretrained weights. Since the pretrained dataset is usually sufficiently large and of high quality, transferring its representations leads to flatter minima located in large flat basins. On the contrary, training from scratch may find sharper minima. To observe this, we adopt filter normalization (Li et al., 2018a) as the visualization tool, and illustrate the loss landscapes around the minima in Figure 3. This observation concurs well with the experiments above: the weight matrices for datasets similar to the pretrained one deviate less from the pretrained weights and stay in the flat region; on more different datasets, the weight matrices have to travel further from the pretrained weights to fit the data and may run out of the flat region. A common belief about modern deep networks is that techniques such as BatchNorm and residual structures improve the loss landscape. Li et al. (2018a) validated this improvement when the model is close to convergence. However, it is often overlooked that loss landscapes can still be messy at the initialization point. To verify this conjecture, we visualize the loss landscapes centered at the initialization point of the 25th layer of ResNet-50 in Figure 5 (visualizations of the other layers can be found in Appendix B.4). ImageNet pretrained networks have a much smoother landscape than randomly initialized networks. The improvement of the loss landscape at the initialization point directly gives rise to the acceleration of training. Concretely, transferred features ameliorate the chaos of the loss landscape with improved Lipschitzness in the early stages of training; thus, gradient-based optimization can easily escape from the initial region where the loss is very large. The properties of loss landscapes influence optimization fundamentally. In randomly initialized networks, going in the direction of the gradient may lead to large variation of the loss function. On the contrary, ImageNet pretrained features make the geometry of the loss landscape much more predictable, and a step in the gradient direction leads to a mild decrease of the loss. To demonstrate the impact of transferred features on the stability of the loss function, we further analyze the variation of the loss in the direction of the gradient in Figure 6. For each step of the training process, we compute the gradient of the loss and measure how the loss changes as we move the weight matrix in that direction. We clearly observe that, in contrast to networks with transferred features, randomly initialized networks have larger variation along the gradient, where a step along the gradient leads to a drastic change in the loss.
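This measurement can be sketched in a few lines of PyTorch; the model, the loss, and the fixed batch below are placeholders, and the step size follows the gradient-magnitude convention detailed in the appendix.

```python
# Sketch: variation of the loss when stepping along the gradient direction,
# a proxy for the local Lipschitzness of the landscape.
import copy
import torch

def loss_variation(model, loss_fn, batch, n_steps=100):
    x, y = batch
    model.zero_grad()
    loss_fn(model(x), y).backward()
    grads = [p.grad.clone() for p in model.parameters()]
    losses = []
    probe = copy.deepcopy(model)            # step on a copy, keep model intact
    with torch.no_grad():
        for _ in range(n_steps):
            for p, g in zip(probe.parameters(), grads):
                p -= g                      # step size = gradient magnitude
            losses.append(loss_fn(probe(x), y).item())
    return max(losses) - min(losses)        # variation along the gradient
```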
Why can transferred features control the magnitude of the gradient and smooth the loss landscape? A natural explanation is that the transferred weight matrices provide an appropriate transform of the gradient at each layer and help stabilize its magnitude. Note that in deep neural networks, the gradient w.r.t. each layer is computed through back-propagation as $\frac{\partial L}{\partial I^{k}_{i}} = W_{k+1}^{\top}\,\frac{\partial L}{\partial I^{k+1}_{i}}$ (omitting the activation mask), where $I^{k}_{i}$ denotes the activation of $x_i$ at layer $k$. The weight matrices $W_k$ thus function as scaling factors of the gradient in back-propagation. Basically, a randomly initialized weight matrix multiplies the magnitude of the gradient by its norm. With pretrained weight matrices, the situation is completely different. To delve into this, we decompose the gradient into singular vectors and measure the projections of the weight matrices onto these principal directions. Results are shown in Figure 7(c). During pretraining, the singular vectors of the gradient with large singular values are shrunk in the weight matrices. Thus, the magnitude of the gradient back-propagated through a pretrained layer is controlled. In this sense, pretrained weight matrices stabilize the magnitude of the gradient, especially in the lower layers. We visualize the magnitude and scaling of the gradient in different layers of ResNet-50 in Figure 7. The gradient of randomly initialized networks grows fast with the number of layers during back-propagation, while the gradient of ImageNet pretrained networks remains stable. Note that ResNet-50 already incorporates BatchNorm and skip connections to improve the gradient flow, yet pretrained representations stabilize the magnitude of the gradient substantially even in these modern networks. We complete this analysis by visualizing the change of the landscapes during back-propagation in Section B.4.
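Both diagnostics are easy to reproduce. The sketch below computes per-layer gradient norms and the projection of a weight matrix onto the top singular vectors of its gradient; the model and shapes are placeholder assumptions rather than the paper's exact instrumentation.

```python
# Sketch: (i) gradient magnitude per layer and (ii) projection of a weight
# matrix onto the top singular vectors of its gradient, to check whether
# the principal gradient directions are suppressed in pretrained weights.
import torch

def grad_norms(model):
    """Gradient norm per named parameter (call after loss.backward())."""
    return {n: p.grad.norm().item()
            for n, p in model.named_parameters() if p.grad is not None}

def projection_on_grad_directions(weight, grad, top=5):
    """||W u_i|| for the top right singular vectors u_i of the gradient."""
    W = weight.detach().reshape(weight.shape[0], -1)
    G = grad.detach().reshape(grad.shape[0], -1)
    _, _, Vh = torch.linalg.svd(G, full_matrices=False)
    return [(W @ Vh[i]).norm().item() for i in range(top)]
```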
Transferring from pretrained representations boosts performance in a wide range of applications. However, as discovered in prior work, there still exist cases where pretrained representations provide no help on target tasks, or even degrade test accuracy. Hence, the conditions under which transfer learning is feasible are an important open problem. In this section, we delve into the feasibility of transfer learning with extensive experiments, while the theoretical perspectives are presented in the next section. We hope our analysis provides practitioners with insights into when to adopt transfer learning. As a common practice, people choose datasets similar to the target dataset for pretraining. However, how can we determine whether a dataset is sufficiently similar to a target dataset? We verify with experiments that the similarity depends on the nature of the tasks, i.e., both inputs and labels matter.

Varying inputs with fixed labels. We randomly sample 600 images from the original SVHN dataset and fine-tune the MNIST pretrained LeNet on this SVHN subset. For comparison, we pretrain two other models, on MNIST with images upside down and on Fashion-MNIST, respectively. Note that for all three pretrained models, the dataset sizes, labels, and number of images per class are kept exactly the same, so the only difference lies in the image pixels themselves. Results are shown in Figure 8(a). Compared to training from scratch, MNIST pretrained features improve generalization significantly. Upside-down MNIST shows slightly worse generalization performance than the original. In contrast, fine-tuning from Fashion-MNIST barely improves generalization. We also compute the deviation from the pretrained weights at each layer. The deviations of the weight matrices and convolutional kernels under Fashion-MNIST pretraining show no improvement over training from scratch. A reasonable implication is that choosing a model pretrained on a dataset with more similar inputs yields a larger performance gain.

Varying labels with fixed inputs. We train a ResNet-50 model on Caltech-101 and then fine-tune it on Webcam. We train another ResNet-50 to recognize the color of the upper part of Caltech-101 images, and fine-tune it on Webcam. Results in Figure 8(b) indicate that the latter provides no improvement over training on Webcam from scratch, while pretraining on standard Caltech-101 significantly boosts performance. Models generalizing very well on similar images are not transferable to a target dataset with totally different labels. These experiments challenge the common view of similarity between datasets: the similarity of the inputs (images) themselves is only one facet; another key factor of similarity is the relationship between the nature of the tasks (labels). This observation is further in line with our theoretical analysis in Section 6.

Currently, people usually train a model on ImageNet until it converges and use it as the pretrained model. However, the final model does not necessarily have the highest transferability. To see this, we pretrain a ResNet-50 model on Food-101 and transfer it to CUB-200, with results shown in Figure 9. During the early epochs, transferability increases sharply. As we continue pretraining, although the test accuracy on the pretraining dataset keeps increasing, the test accuracy on the target dataset starts to decline, indicating degraded transferability. Intuitively, during the early epochs the model learns general knowledge that is informative for many datasets; as training goes on, however, the model starts to fit the specific knowledge of the pretrained dataset and even to fit noise. Such dataset-specific knowledge is usually detrimental to transfer performance. This interesting finding implies a promising direction for improving the de facto pretraining method: instead of seeking a model with the highest accuracy only on the pretraining dataset, a more transferable model can be pretrained for an appropriate number of epochs such that the fine-tuning accuracies on a diverse set of target tasks are advantageous. Algorithms for pretraining should take this point into consideration.

We have shown through extensive empirical analysis that transferred features exert a fundamental impact on generalization and optimization performance, and we have provided some insights into the feasibility of transfer learning. In this section, we analyze some of our empirical observations from a theoretical perspective. We base our analysis on two-layer fully connected networks with ReLU activation and sufficiently many hidden units. Our theoretical results are in line with the experimental findings. Denote by $\sigma(\cdot)$ the ReLU activation function, $\sigma(z) = \max\{z, 0\}$. $\mathbb{I}\{A\}$ is the indicator function, i.e., $\mathbb{I}\{A\} = 1$ if $A$ is true and $0$ otherwise. $[m]$ is the set of integers from 1 to $m$. Consider a two-layer ReLU network with $m$ hidden units and weight matrix $W = (w_1, \cdots, w_m) \in \mathbb{R}^{d \times m}$. We are provided with $n_Q$ samples $\{x_{Q,i}, y_{Q,i}\}_{i=1}^{n_Q}$ drawn i.i.d. from the target distribution $Q$ as the target dataset, and a weight matrix $W(P)$ pretrained on $n_P$ samples $\{x_{P,i}, y_{P,i}\}_{i=1}^{n_P}$ drawn i.i.d. from the pretraining distribution $P$. Suppose $\|x\|_2 = 1$ and $|y| \le 1$.
Our goal is to transfer the pretrained $W(P)$ to learn an accurate model $W(Q)$ for the target distribution $Q$. When training the model on the pretraining dataset, we initialize the weights as $w_r \sim N(0, \kappa^2 I)$ and $a_r \sim \mathrm{unif}(\{-1, 1\})$ for all $r \in [m]$, where $\kappa$ is a constant. For both pretraining and fine-tuning, the objective function is the squared loss $L(W) = \frac{1}{2}\sum_i (f_W(x_i) - y_i)^2$. Note that $a$ is fixed throughout training and $W$ is updated with gradient descent; the learning rate is set to $\eta$. We base our analysis on an established theoretical framework for over-parametrized two-layer networks, since it provides elegant results on the convergence of two-layer ReLU networks without strong assumptions on the input distributions, facilitating our extension to transfer learning scenarios. In our analysis, we use the Gram matrices $H^{\infty}_P \in \mathbb{R}^{n_P \times n_P}$ and $H^{\infty}_Q \in \mathbb{R}^{n_Q \times n_Q}$ to measure the quality of the pretrained and target inputs:

$(H^{\infty}_P)_{ij} = \mathbb{E}_{w \sim N(0, I)}\big[x_{P,i}^{\top} x_{P,j}\, \mathbb{I}\{w^{\top} x_{P,i} \ge 0,\, w^{\top} x_{P,j} \ge 0\}\big],$

and analogously for $H^{\infty}_Q$. To quantify the relationship between the pretrained and target inputs, we define the following Gram matrix $H^{\infty}_{PQ} \in \mathbb{R}^{n_P \times n_Q}$ across samples drawn from $P$ and $Q$:

$(H^{\infty}_{PQ})_{ij} = \mathbb{E}_{w \sim N(0, I)}\big[x_{P,i}^{\top} x_{Q,j}\, \mathbb{I}\{w^{\top} x_{P,i} \ge 0,\, w^{\top} x_{Q,j} \ge 0\}\big].$

Assume the Gram matrices $H^{\infty}_P$ and $H^{\infty}_Q$ are invertible, with smallest eigenvalues $\lambda_P$ and $\lambda_Q$ greater than zero. $(H^{\infty}_P)^{-1} y_P$ characterizes the labeling function of the pretrained task. $y_{P \to Q} \triangleq (H^{\infty}_{PQ})^{\top} (H^{\infty}_P)^{-1} y_P$ further transforms the pretrained labeling function to the target labels. A critical quantity in our analysis is $y_Q - y_{P \to Q}$, which measures the task similarity between the target labels and the transformed labels. To analyze the Lipschitzness of the loss function, a reasonable objective is the magnitude of the gradient, which is a direct manifestation of the Lipschitz constant. We analyze the gradient w.r.t. the activations, and show that the Lipschitz constant is significantly reduced when the pretrained and target datasets are similar in both inputs and labels.

Theorem 1 (The effect of transferred features on the Lipschitzness of the loss). Denote by $X_1$ the activations on the target dataset. For a two-layer network with a sufficiently large number of hidden units $m$ as defined in Section 6.1, if $m \ge \mathrm{poly}(n_P, n_Q, \delta^{-1}, \lambda_P^{-1}, \lambda_Q^{-1})$, then with probability no less than $1 - \delta$ over the random initialization, the magnitude of the gradient w.r.t. the activations is bounded by $\|y_Q - y_{P \to Q}\|_2$ up to lower-order perturbation terms.

This provides a theoretical explanation of the experimental results in Section 4. The control of the Lipschitz constant relies on the similarity between the tasks in both inputs and labels. If the original target labels are similar to the labels transformed from the pretrained ones, i.e., $\|y_Q - y_{P \to Q}\|_2^2$ is small, the Lipschitzness of the loss function is significantly improved. On the contrary, if the pretrained and target tasks are completely different, the transformed labels are discrepant from the target labels, resulting in a larger Lipschitz constant of the loss function and a worse landscape in the fine-tuned model. Recall that in Section 3 we investigated the weight change $\|W(Q) - W(P)\|_F$ during training and pointed out the role it plays in understanding generalization. In this section, we show that $\|W(Q) - W(P)\|_F$ can be bounded by terms depicting the similarity between the pretrained and target tasks. Note that the Rademacher complexity of the function class is bounded by $\|W(Q) - W(P)\|_F$, as shown in seminal work; thus the generalization error is directly related to $\|W(Q) - W(P)\|_F$. We still use the Gram matrices defined in Section 6.1.
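These Gram matrices have a well-known closed form for unit-norm inputs, which makes them, and the transformed labels $y_{P \to Q}$, easy to compute numerically. The NumPy sketch below uses random toy data; the transpose on $H_{PQ}$ is our dimensional bookkeeping rather than a statement from the paper.

```python
# Sketch: ReLU Gram matrices of Section 6.1 via the closed form
# E_w[x_i^T x_j 1{w^T x_i >= 0, w^T x_j >= 0}] = x_i^T x_j (pi - theta_ij)/(2 pi),
# with theta_ij the angle between unit inputs, and the transformed labels.
import numpy as np

def gram(X, Z):
    """Cross Gram matrix for rows of X (n1 x d) and Z (n2 x d), unit norm."""
    inner = np.clip(X @ Z.T, -1.0, 1.0)
    theta = np.arccos(inner)
    return inner * (np.pi - theta) / (2 * np.pi)

X_P = np.random.randn(6, 4); X_P /= np.linalg.norm(X_P, axis=1, keepdims=True)
X_Q = np.random.randn(5, 4); X_Q /= np.linalg.norm(X_Q, axis=1, keepdims=True)
y_P = np.random.randn(6)

H_P = gram(X_P, X_P)                           # n_P x n_P
H_PQ = gram(X_P, X_Q)                          # n_P x n_Q
y_PtoQ = H_PQ.T @ np.linalg.solve(H_P, y_P)    # transformed labels on Q
```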
Theorem 2 (The effect of transferred features on the generalization error). For a two-layer network with $m \ge \mathrm{poly}(n_P, n_Q, \delta^{-1}, \lambda_P^{-1}, \lambda_Q^{-1})$, with probability no less than $1 - \delta$ over the random initialization,

$\|W(Q) - W(P)\|_F \le \sqrt{(y_Q - y_{P \to Q})^{\top} (H^{\infty}_Q)^{-1} (y_Q - y_{P \to Q})} + \text{lower-order terms}.$

This is directly related to the generalization error and casts light on our experiments in Section 5.1. Note that when training on the target dataset from scratch, the corresponding upper bound is $\sqrt{y_Q^{\top} (H^{\infty}_Q)^{-1} y_Q}$. By fine-tuning from a similar pretrained dataset, where the transformed labels are close to the target labels, the generalization error of the function class is hopefully reduced. On the contrary, features pretrained on discrepant tasks do not transfer to the classification task, in spite of similar images, since they have disparate labeling functions. Another example is fine-tuning to Food-101. Since it is a fine-grained dataset with many similar images, $H^{\infty}_Q$ will be closer to singular than for common tasks, resulting in a larger deviation from the pretrained weights. Hence, even transferring from ImageNet, the performance on Food-101 is still far from satisfactory.

Why are deep representations pretrained with modern neural networks generally transferable to novel tasks? When is transfer learning feasible enough to consistently improve target task performance? These are key questions on the way to understanding modern neural networks and applying them to a variety of real tasks. This paper performs the first in-depth analysis of the transferability of deep representations from both empirical and theoretical perspectives. The results reveal that pretrained representations improve both the generalization and the optimization performance of a target network, provided that the pretrained and target datasets are sufficiently similar in both inputs and labels. With this paper, we show that transfer learning, as an initialization technique for neural networks, exerts an implicit regularization that restricts the networks from escaping the flat region of the pretrained landscape.

In this section, we provide details of the architectures, setup, and visualization methods used in our analysis. The code and visualizations are attached with the submission and will be made available online. We implement all models in PyTorch on 2080Ti GPUs. For object recognition and scene recognition tasks, we use the standard ResNet-50 from torchvision. ImageNet pretrained models can be found in torchvision, and Places pretrained models are provided by their original authors. During fine-tuning, we use a batch size of 32 and set the initial learning rate to 0.01 with 0.9 momentum, following the standard fine-tuning protocol. We train the model for 200 epochs and decay the learning rate by 0.1, with the time of decay set by cross-validation. In Figure 2(a), where the pretrained ResNet-50 functions as a feature extractor, the downstream classifier is a two-layer ReLU network with BatchNorm and Leaky-ReLU non-linearities and 512 hidden units. For this task, the backbone ResNet-50 is fixed, and the downstream two-layer classifier is trained with momentum SGD; the learning rate is set to 0.01 with 0.9 momentum, and remains constant throughout training. For digit recognition tasks, we use LeNet. The learning rate is also set to 0.01, with $5 \times 10^{-4}$ weight decay; the batch size is 64, and we train the model for 100 epochs.

Fine-tuning. We follow the fine-tuning protocol of the previous paragraphs. For Tables 1 and 2, we run all experiments three times and report the mean and variance.
For Table 2, the improvement of fine-tuning is calculated as the generalization error of fine-tuning divided by the generalization error of training from scratch.

Visualization of loss landscapes. We use techniques similar to filter normalization to provide an accurate analysis of loss landscapes (Li et al., 2018a). Note that ReLU networks are invariant to the scaling of weight parameters. To remove this scaling effect, the directions used in visualization should be normalized in a filter-wise way. Concretely, the axes of each landscape figure are two random Gaussian orthogonal vectors normalized by the scale of each filter of the convolutional layers. Suppose the parameter of the center point is $\theta$, and $\theta_{i,j}$ denotes the $j$-th filter of the $i$-th layer. Suppose the two unit orthogonal vectors are $a$ and $b$. With filter normalization, $a_{i,j} \leftarrow \frac{a_{i,j}}{\|a_{i,j}\|}\|\theta_{i,j}\|$ and $b_{i,j} \leftarrow \frac{b_{i,j}}{\|b_{i,j}\|}\|\theta_{i,j}\|$. For each point (i.e., pixel) $(p, q)$ in the plot, the value is evaluated as $g(p, q) = L(f(\theta + \eta(p a + q b)))$, where $L$ denotes the loss function and $f$ denotes the neural network. $\eta$ is a parameter controlling the scale of the plot. In all visualization images of ResNet-50, the resolution is $200 \times 200$, i.e., $p = -100, -99, \ldots, 98, 99$ and $q = -100, -99, \ldots, 98, 99$. For additional details of filter normalization, please refer to Li et al. (2018a). $\eta$ is set to 0.001, which is of the same order as 10 times the step size in training; this is a reasonable scale for studying the local loss landscape of a model trained with SGD. For a fair comparison between the pretrained and randomly initialized landscapes, the scale of loss variation in each plot is exactly the same, and the difference in loss value between adjacent contours is 0.05. When we compute the loss landscape of one layer, the parameters of the other layers are fixed. The gradient is computed on 256 fixed samples, since the gradient w.r.t. the full dataset requires too much computation. Figure 3 and Figure 10 are centered at the final weight parameters, while the others are centered at the initialization point to show the situation when training just starts. We visualized the loss landscapes on CUB-200, Stanford Cars, and Food-101 multiple times and reached consistent results; due to space limitations, we only show the results on one dataset for each experiment in the main paper. The other results are deferred to Section B.

Computing the eigenvalues of the Hessian. We compute the eigenvalues of the Hessian with Hessian-vector products and power methods based on the autograd of PyTorch; a similar implementation is publicly available. We only list the top 20 eigenvalues due to limited space.

t-SNE embedding of model parameters. We concatenate the weight matrices of ResNet-50 into one vector as input. For faster computation, we pre-compute the distance between the parameters of every two models with PyTorch, and then use the distance matrix to compute the t-SNE embedding with scikit-learn. Note that we use the same ImageNet model from torchvision and the same Places model for fine-tuning.

Variation of the loss function in the direction of the gradient. Based on the original training trajectory, we take steps in the direction of the gradient from the parameters at different steps of training, and calculate the maximum change of the loss in that direction. The step size is set to the magnitude of the gradient. We take 100 steps from the original trajectories to measure the local properties of the loss landscapes. We aim to quantify the stability of the loss function and directly show the magnitude of the gradient with this experiment on different datasets.
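The filter normalization and the evaluation of $g(p, q)$ can be sketched as follows. The model, loss, batch, and the handling of bias vectors are placeholder choices for the procedure described above.

```python
# Sketch: filter-normalized directions and g(p, q) = L(f(theta + eta(pa + qb))).
import copy
import torch

def filter_normalize(direction, params):
    """Rescale each filter of a random direction to match the parameter norms."""
    for d, p in zip(direction, params):
        if d.dim() >= 2:                       # conv / linear filters
            for i in range(d.shape[0]):
                d[i] *= p[i].norm() / (d[i].norm() + 1e-10)
        else:
            d.zero_()                          # biases: excluded (one common choice)

def landscape_value(model, loss_fn, batch, a, b, p, q, eta=1e-3):
    probe = copy.deepcopy(model)               # perturb a copy of theta
    with torch.no_grad():
        for w, da, db in zip(probe.parameters(), a, b):
            w += eta * (p * da + q * db)
        x, y = batch
        return loss_fn(probe(x), y).item()

# usage: a = [torch.randn_like(p) for p in model.parameters()]
#        filter_normalize(a, list(model.parameters())); same for b;
#        then evaluate landscape_value over p, q in [-100, 99].
```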
Results on CUB-200 are provided in the main paper, with additional results provided in Section B. Note that this experiment is inspired by prior work, and we use a similar protocol. Another protocol is to fix the step size along the gradient and compute the maximum variation of the loss; results on Stanford Cars with this protocol are provided in Section B.2. The results for both scenarios are similar.

Figure 11: Variation of the loss in ResNet-50 with ImageNet pretrained weights and random initialization. We compare the variation of the loss function in the direction of the gradient during training on the Stanford Cars dataset. The variation of the pretrained network is substantially smaller than that of the randomly initialized one, implying a more desirable loss landscape and more stable optimization.

To validate that the generalization error is indeed improved by pretraining, for each dataset we list the generalization error and the norm of the deviation from the pretrained parameters in Table 2. The decrease percentage is calculated by dividing the error reduction from fine-tuning by the error of training from scratch. We observe that ImageNet pretraining significantly improves the generalization performance of general coarse-grained classification tasks, yet the performance boost is smaller for fine-grained tasks, which are dissimilar to ImageNet in the sense of the task. Note that although Stanford Cars and CUB-200 are visually similar to ImageNet, what really matters is the similarity between the nature of the tasks, i.e., both images and labels matter.

We visualize the loss landscapes of the 25th to 48th layers of ResNet-50 on the Food-101 dataset, comparing the landscapes centered at the initialization point of randomly initialized and ImageNet pretrained networks; see Figure 12 and Figure 13. Results are in line with our observations of the magnitude of the gradient in Figure 7. At the higher layers, the landscapes of random initialization and ImageNet pretraining are similar. However, as the gradient is back-propagated through the lower layers, the landscapes of pretrained networks remain as smooth as in the higher layers. In sharp contrast, the landscapes of randomly initialized networks deteriorate through the lower layers, indicating that the magnitude of the gradient degrades substantially during back-propagation.

Figure 12: Landscapes centered at the initialization point of each layer in ResNet-50 using ImageNet pretrained weights. The smoothness of the landscapes of each layer is nearly identical, indicating a proper scaling of the gradient.

Figure 13: Landscapes centered at the initialization point of each layer in ResNet-50 initialized randomly. At the higher layers, the landscapes tend to be smooth. However, as the gradient is propagated to the lower layers, the landscapes become full of ridges and trenches, in spite of the presence of BatchNorm and skip connections.

To study how transferring pretrained knowledge helps target tasks, we first study the trajectories of the weight matrices during pretraining, and then analyze their effect as an initialization on target tasks. Our analysis is based on the over-parametrization framework mentioned above. For a weight matrix $W$, $W(0)$ denotes the random initialization. $W_P(k)$ denotes $W$ at the $k$-th step of pretraining. $W(P)$ denotes the pretrained weight matrix after training for $K$ steps. $W_Q(k)$ denotes the weight matrix after $k$ steps of fine-tuning from $W(P)$. For other terms, the notation at each step is analogous.
We first analyze the pretraining process on the source dataset, following the over-parametrization framework. Define a matrix $Z_P \in \mathbb{R}^{md \times n_P}$, which is crucial to analyzing the trajectory of the weight matrix during pretraining, whose $j$-th column stacks $\frac{1}{\sqrt{m}}\, a_r\, \mathbb{I}^{P}_{r,j}\, x_{P,j}$ over $r \in [m]$, where $\mathbb{I}^{P}_{i,j} = \mathbb{I}\{w_i^{\top} x_{P,j} \ge 0\}$. $Z_P(k)$ denotes the matrix corresponding to $W_P(k)$. Note that gradient descent is carried out as

$\mathrm{vec}(W_P(k+1)) = \mathrm{vec}(W_P(k)) - \eta\, Z_P(k)\,\big(u_P(k) - y_P\big),$

where $\mathrm{vec}(\cdot)$ denotes concatenating the columns of a matrix into a single vector. Then, over the $K$ iterations of pretraining on the source dataset,

$\mathrm{vec}(W(P)) - \mathrm{vec}(W(0)) = Z_P\,(H^{\infty}_P)^{-1} y_P + \text{perturbation terms}.$

The first term is the primary component of the pretrained matrix, while the remaining terms are small under the over-parametrized conditions. Following the same framework, the magnitude of these terms can be bounded with probability no less than $1 - \delta$. Here we also provide lemmas that are used extensively later. Lemma 1 bounds the deviation of the weights from their initialization with probability at least $1 - \delta$ over the random initialization. Lemma 2. If $w_1, \ldots, w_m$ are i.i.d. generated from $N(0, I)$, then with probability at least $1 - \delta$ the following holds: for any set of weight vectors $w_1, \ldots, w_m \in \mathbb{R}^d$ satisfying $\|w_r - w_r(0)\|_2 \le \frac{c\,\delta\,\lambda_0}{n^2} \triangleq R$ for all $r \in [m]$ and some small positive constant $c$, the matrix $H \in \mathbb{R}^{n \times n}$ defined by the corresponding activation patterns remains close to $H^{\infty}$ and stays positive definite. Now we start to analyze the influence of the pretrained weights on target tasks: 1) we show how the weight matrix moves during pretraining; 2) we then analyze $u_Q(P) - u_Q$ using the properties of $H^{\infty}_{PQ}$; 3) a standard calculation shows that the magnitude of the gradient relates closely to $u_Q(P) - u_Q$, so we can determine how the magnitude of the gradient is improved. To start with, we analyze the properties of the matrix $H^{\infty}_{PQ}$. We show that under over-parametrized conditions, $H^{\infty}_{PQ}$ is close to the randomly initialized Gram matrix $Z_P^{\top} Z_Q(P)$. Use $H_{PQ}$ to denote $Z_P^{\top} Z_Q$, and $H_{PQ}(P)$ to denote $Z_P^{\top} Z_Q(P)$. Lemma 3. Under the same conditions as Lemma 1, with probability no less than $1 - \delta$, $H_{PQ}(P)$ is close to $H^{\infty}_{PQ}$ up to a term involving a small constant $c$. Since $w_r$ is independent of $x_{Q,i}$ and $\|x_{Q,i}\|_2 = 1$, $w_r^{\top} x_{Q,i}$ and the entries of $w_r$ follow the same Gaussian distribution. Applying Markov's inequality, we obtain with probability no less than $1 - \delta$ a bound on the number of flipped activation patterns. Also note that $\mathbb{E}[H_{PQ,ij}] = H^{\infty}_{PQ,ij}$; by Hoeffding's inequality, we obtain a concentration bound with probability at least $1 - \delta$. Combining equations 12 and 11 completes the argument. Denote by $u_Q(P)$ and $u_Q$ the outputs on the target dataset using the weight matrices $W(P)$ and $W(0)$, respectively. First, we compute the gradient with respect to the activations; it is then obvious that $u_Q(P) - u_Q$ should become the focus of our analysis. To calculate $u_Q(P) - u_Q$, we need to sort out how the activations change when the target network is initialized with $W(P)$ instead of $W(0)$. For each $x_{Q,i}$, we divide the indices $r$ into two sets, $S_i$ and its complement, to quantify the change of the activations on the target dataset. For $r$ in $S_i$, we can estimate the size of $S_i$, since the distribution of $w_r$ is Gaussian with mean 0 and covariance matrix $\kappa^2 I$. Therefore, summing over all $i$ and $m$ and using Markov's inequality, the bound holds with probability at least $1 - \delta$ over the random initialization. Thus, this part of the activations is the same for $W(0)$ and $W(P)$ on the target dataset. For each $x_{Q,i}$, the difference decomposes into three terms: the first term is the primary part, while the second and third terms can be bounded by $\epsilon_1$ and $\epsilon_2$, respectively, corresponding to the second and third terms in equation 18.
Thus, using Lemma 1 and the estimation of $|S_i|$, the corresponding bound holds with probability no less than $1 - \delta$. Now, equipped with equations 6, 19, and 20 and Lemma 3, we are ready to calculate exactly how much the pretrained weight matrix $W(P)$ reduces the magnitude of the gradient compared with $W(0)$: the perturbation terms and $\|u_Q\|_2$ are all small quantities estimated above. Therefore, using $\|Z_P\|_F \le \sqrt{n_P}$, we can control the magnitude of the perturbation terms under the over-parametrized conditions. Concretely, the bound holds with probability at least $1 - \delta$ over the random initialization. Substituting these estimations into equation 21 completes the proof of Theorem 1.

In this subsection, we analyze the impact of the pretrained weight matrix on the generalization performance. First, we show that a model converges if initialized with the pretrained weight matrix. Based on this, we further investigate the trajectories during transfer learning and bound $\|W - W(P)\|_F$ in terms of the relationship between the source and target datasets. Theorem 3. If we set the number of hidden nodes $m$ polynomially large and the learning rate $\eta$ sufficiently small, then with probability at least $1 - \delta$ over the random initialization, the training loss converges linearly for $k = 0, 1, 2, \ldots$. The following lemma is a direct corollary of Theorem 3 and Lemma 1, and is crucial to the analysis that follows. Lemma 4. Under the same conditions as Theorem 3, with probability at least $1 - \delta$ over the random initialization, for all $r \in [m]$ and all $k \ge 0$, the weights stay in a small neighborhood of their initialization. We have the estimation of $\|w_{Q,r}(k) - w_r(0)\|_2$ from Lemma 1. From $\|w_{Q,r}(k) - w_r(0)\|_2 \le \|w_{Q,r}(k) - w_r(P)\|_2 + \|w_r(P) - w_r(0)\|_2$, we can prove Lemma 4 by estimating $\|w_{Q,r}(k) - w_r(P)\|_2$. Moreover, $u_Q(P) - u_Q = Z_Q^{\top}\big(Z_P (H^{\infty}_P)^{-1} y_P + \epsilon\big) + \epsilon_1 + \epsilon_2$. Substituting Lemma 3, equation 6, and equation 22 into $\|u_Q(P) - u_Q\|_2$ completes the proof. Now we start to prove Theorem 3 by induction, with the following corollary if Condition 1 holds. Corollary 1. If Condition 1 holds for steps $0, \ldots, k$, then for every $r \in [m]$, with probability at least $1 - \delta$,

$\|w_{Q,r}(k) - w_r(0)\|_2 \le \frac{4\sqrt{n_P}\,\|y_P - u_P\|_2}{\sqrt{m}\,\lambda_P} + \frac{4\sqrt{n_Q}\,\|y_Q - u_Q(P)\|_2}{\sqrt{m}\,\lambda_Q} \triangleq R'.$

If $k = 0$, Condition 1 holds by definition. Suppose Condition 1 holds for steps $0, \ldots, k$; we want to show it still holds for step $k + 1$. The strategy is similar to the proof of convergence when training from scratch: by classifying the changes of the activations into two categories, we can treat the ReLU network as a perturbed version of linear regression. We define the event $A_{ir} = \{|w_r(0)^{\top} x_{Q,i}| \le R'\}$, with $R'$ as above, and note that $k(1 - \eta\lambda_Q/2)^{k-1}$ is uniformly bounded over $k > 0$. The first term is the primary part, while the second and third are perturbations that can be controlled using Lemma 6 and equation 38. Since $\|Z_Q(k) - Z_Q(0)\|_F$ is bounded and the maximum eigenvalue of $(H^{\infty}_Q)^{-1}$ is $\lambda_Q^{-1}$, the proof is complete.
Understand transferability from the perspectives of improved generalization, optimization and the feasibility of transferability.
991
scitldr
We address the following question: How redundant is the parameterisation of ReLU networks? Specifically, we consider transformations of the weight space which leave the function implemented by the network intact. Two such transformations are known for feed-forward architectures: permutation of neurons within a layer, and positive scaling of all incoming weights of a neuron coupled with inverse scaling of its outgoing weights. In this work, we show for architectures with non-increasing widths that permutation and scaling are in fact the only function-preserving weight transformations. For any eligible architecture we give an explicit construction of a neural network such that any other network that implements the same function can be obtained from the original one by the application of permutations and rescaling. The proof relies on a geometric understanding of boundaries between linear regions of ReLU networks, and we hope the developed mathematical tools are of independent interest. Ever since its early successes, deep learning has been a puzzle for machine learning theorists. Multiple aspects of deep learning seem at first sight to contradict common sense: single-hidden-layer networks suffice to approximate any continuous function, yet in practice deeper is better; the loss surface is highly non-convex, yet it can be minimised by first-order methods; the capacity of the model class is immense, yet deep networks tend not to overfit. Recent investigations into these and other questions have emphasised the role of over-parameterisation, or highly redundant function representation. It is now known that over-parameterised networks enjoy both easier training and better generalisation. However, the specific mechanism by which over-parameterisation operates is still largely a mystery. In this work, we study one particular aspect of over-parameterisation, namely the ability of neural networks to represent a target function in many different ways. In other words, we ask whether many different parameter configurations can give rise to the same function. Such a notion of parameterisation redundancy has so far remained unexplored, despite its potential connections to the structure of the loss landscape, as well as to the literature on neural network capacity in general. Specifically, we consider feed-forward ReLU networks, with weight matrices W_1, …, W_L and biases b_1, …, b_L. We study parameter transformations which preserve the output behaviour of the network h(z) = W_L σ(W_{L−1} σ(⋯ W_1 z + b_1 ⋯) + b_{L−1}) + b_L for all inputs z in some domain Z. Two such transformations are known for feed-forward ReLU architectures: 1. Permutation of units (neurons) within a layer, i.e. for some permutation matrix P, (W_l, b_l) → (P W_l, P b_l) together with W_{l+1} → W_{l+1} P^T. 2. Positive scaling of all incoming weights of a unit coupled with inverse scaling of its outgoing weights. Applied to a whole layer, with potentially different scaling factors arranged into a positive diagonal matrix M, this can be written as (W_l, b_l) → (M W_l, M b_l) together with W_{l+1} → W_{l+1} M^{−1}. Our main theorem applies to architectures with non-increasing widths, and shows that there are no other function-preserving parameter transformations besides permutation and scaling. Stated formally: Theorem 1. Consider a bounded open nonempty domain Z ⊆ R^{d_0} and any architecture (d_0, d_1, …, d_L) with non-increasing widths d_0 ≥ d_1 ≥ ⋯ ≥ d_{L−1} and d_L = 1. For this architecture, there exists a ReLU network h_θ: Z → R, or equivalently a setting of the weights θ = (W_1, b_1,
…, W_L, b_L), such that for any 'general' ReLU network h_η: Z → R (with the same architecture) satisfying h_θ(z) = h_η(z) for all z ∈ Z, there exist permutation matrices P_1, …, P_{L−1} and positive diagonal matrices M_1, …, M_{L−1} such that η is obtained from θ by applying the permutation and scaling transformations 1 and 2 above with these matrices, where η = (W'_1, b'_1, …, W'_L, b'_L) are the parameters of h_η. In the above, 'general' networks are a class of networks meant to exclude degenerate cases. We give a more precise definition in Section 3; for now it suffices to note that almost all networks are general. The proof of the result relies on a geometric understanding of prediction surfaces of ReLU networks. These surfaces are piece-wise linear functions, with non-differentiabilities or 'folds' between linear regions. It turns out that folds carry a lot of information about the parameters of a network, so much in fact, that some networks are uniquely identified (up to permutation and scaling) by the function they implement. This is the main insight of the theorem. In the following sections, we introduce in more detail the concept of a fold-set, and describe its geometric structure for a subclass of ReLU networks. The paper culminates in a proof sketch of the main result. The full proof, including proofs of intermediate results, is included in the Appendix. The functional equivalence of neural networks is a well-researched topic in classical connectionist literature. The problem was posed early on and soon resolved for feed-forward networks with the tanh activation function: any smooth transformation of the weight space that preserves the function of all neural networks is necessarily a composition of permutations and sign flips. For the same class of networks, a somewhat stronger result was later shown: knowledge of the input-output mapping of a neural network determines both its architecture and its weights, up to permutations and sign flips. Similar results have been proven for single-layer networks with a saturating activation function such as sigmoid or RBF (Kůrková & Kainen), as well as single-layer recurrent networks with a smooth activation function. To the best of our knowledge, no such theoretical results exist for networks with the ReLU activation, which is non-saturating, asymmetric and non-smooth. Broadly related is recent work studying whether two neural networks (ReLU or otherwise) that are close in the functional space have parameterisations that are close in the weight space. This is called inverse stability. In contrast, we are interested in ReLU networks that are functionally identical, and ask about all their possible parameterisations. In terms of proof technique, our approach is based on the geometry of piece-wise linear functions, specifically the boundaries between linear regions. The intuition for this kind of analysis has previously been presented in the literature, and somewhat similar proof techniques to ours have been used in the context of counting the number of linear decision regions. Finally, the sets of equivalent parametrisations can be viewed as symmetries in the weight space, with implications for optimisation. Multiple authors have observed that the naive loss gradient is sensitive to reparametrisation by scaling, and proposed alternative, scaling-invariant optimisation procedures.
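As a concrete illustration of transformations 1 and 2, the following short sketch (ours, with arbitrary sizes) permutes and rescales a hidden layer and checks numerically that the network's outputs are unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
d0, d1, d2 = 4, 3, 1                      # one hidden layer: h(z) = W2 relu(W1 z + b1) + b2
W1, b1 = rng.normal(size=(d1, d0)), rng.normal(size=d1)
W2, b2 = rng.normal(size=(d2, d1)), rng.normal(size=d2)

def h(z, W1, b1, W2, b2):
    return W2 @ np.maximum(W1 @ z + b1, 0.0) + b2

P = np.eye(d1)[rng.permutation(d1)]       # permutation matrix
M = np.diag(rng.uniform(0.5, 2.0, d1))    # positive scaling factors

# Transformation 1 (permutation) composed with 2 (scaling) of the hidden layer.
W1p, b1p = M @ P @ W1, M @ P @ b1
W2p = W2 @ P.T @ np.linalg.inv(M)         # outgoing weights: inverse of (M P)

z = rng.normal(size=d0)
print(np.allclose(h(z, W1, b1, W2, b2), h(z, W1p, b1p, W2p, b2)))  # True
```

Applied layer by layer, the same check extends to deeper networks; the theorem asserts that for general networks these are the only ways of leaving h unchanged.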
We will omit the subscript θ when it is clear from the context. In this work, we restrict our attention to so-called general ReLU networks. Intuitively, a general network is one that satisfies a number of non-degeneracy properties, such as all weight matrices having non-zero entries and full rank, no two network units exactly cancelling each other out, etc. It can be shown that almost all ReLU networks are general. In other words, a sufficient condition for a ReLU network to be general with probability one is that its weights are sampled from a distribution with a density. More formally, a general ReLU network is one that satisfies the following conditions (stated precisely as Lemmas A.10–A.12 in the Appendix). 1. For any unit (l, i), the local optima of h^{1:l}_i do not have value exactly zero. 2. For all k ≤ l and all diagonal matrices (I_k, …, I_l) with entries in {0, 1}, the products I_l W_l ⋯ I_k W_k have rank min{d_{k−1}, rank(I_k), …, rank(I_l)}. General networks are convenient to study, as they exclude many degenerate special cases. The second important class of ReLU networks are so-called transparent networks. Their significance as well as their name will become clear in the next section. For now, we state the definition: a ReLU network h is transparent if for every z ∈ Z and every l ∈ [L − 1] there exists i ∈ [d_l] with h^{1:l}_i(z) > 0. In words, we require that for any input, at least one unit on each layer is active. In this section we introduce the concept of fold-sets, which is key to our understanding of ReLU networks and their prediction surfaces. Since ReLU networks are piece-wise linear functions, a great deal about them is revealed by the boundaries between individual linear regions. A network's fold-set is simply the union of all these boundaries. More formally, if Z is an open set, and f: Z → R is any continuous, piece-wise linear function, we define the fold-set of f, denoted by F(f), as the set of all points at which f is non-differentiable. It turns out there is a class of networks whose fold-sets are especially easy to understand; these are the ones we have termed transparent. For transparent networks, we have the following characterisation of the fold-set, which also motivates the name 'transparent' (Lemma 2): F(h) = ∪_{l∈[L−1]} ∪_{i∈[d_l]} {z ∈ Z | h^{1:l}_i(z) = 0}. To appreciate the significance of the lemma, suppose we are given some transparent ReLU network function h and we want to infer its parameters. This lemma shows that the knowledge of the end-to-end mapping h = h^{1:L} in fact gives us information about the network's hidden units h^{1:l}_i (hence 'transparent'). Moreover, this information is very explicit: we observe the units' zero-level sets, which in the case of a linear unit on a full-dimensional space already determines the unit's parameters up to scaling. Of course, dealing with piece-wise linearity and disambiguating the union into its constituent zero-level sets remains a challenge for upcoming sections. In this section, we provide a geometric description of fold-sets of transparent networks. Intuitively, the fold-sets look like the sets shown in Figure 1. The first-layer units of a network are linear, so the component ∪_i {z | h^{1:1}_i(z) = 0} of the fold-set is a union of hyperplanes, illustrated by the blue lines in Figure 1. These hyperplanes partition the input space into a number of regions that each correspond to a different activation pattern. For a fixed activation pattern, or equivalently on each region, the second-layer units are linear, so their zero-level sets ∪_i {z | h^{1:2}_i(z) = 0} are composed of piece-wise hyperplanes on the partition induced by the first-layer units. This is shown by the orange lines in Figure 1. More generally, the l-th-layer zero-level sets ∪_i {z | h^{1:l}_i(z) = 0} consist of piece-wise hyperplanes on the partition induced by all lower-layer units.
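The fold-set can also be probed numerically. The following small sketch (our own, with arbitrary weights) approximates F(h) for a toy 2-input network by marking grid cells where the piece-wise-constant gradient of h jumps:

```python
import numpy as np

rng = np.random.default_rng(1)
dims = [2, 3, 3, 1]                      # d_0 = 2 inputs, two hidden layers, scalar output
Ws = [rng.normal(size=(dims[i + 1], dims[i])) for i in range(3)]
bs = [rng.normal(size=dims[i + 1]) for i in range(3)]

def grad(z):
    """Gradient of the piece-wise linear network at z; constant on each linear region."""
    J, a = np.eye(2), z
    for W, b in zip(Ws[:-1], bs[:-1]):
        pre = W @ a + b
        D = np.diag((pre > 0).astype(float))  # active-unit mask on this layer
        J = D @ W @ J
        a = np.maximum(pre, 0.0)
    return (Ws[-1] @ J).ravel()

# Mark grid cells whose neighbours have different gradients: these straddle a fold.
xs = np.linspace(-2, 2, 200)
G = np.array([[grad(np.array([x, y])) for x in xs] for y in xs])
fold = (np.abs(np.diff(G, axis=0)).sum(-1)[:, :-1]
        + np.abs(np.diff(G, axis=1)).sum(-1)[:-1, :]) > 1e-9
print("fraction of cells straddling a fold:", fold.mean())
```

Plotting the boolean mask reproduces exactly the structure described above: straight lines for first-layer units and lines that bend where they cross lower-layer folds for deeper units.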
The resulting fold-set looks like the set in the right pane of Figure 1, but potentially much more complicated. We now define these concepts more precisely. Piece-wise hyperplane. Let P be a partition of Z. We say H ⊆ Z is a piece-wise hyperplane with respect to the partition P if H is nonempty and there exist (w, b) ≠ (0, 0) and P ∈ P such that H = {z ∈ P | w^T z + b = 0}. The final ingredient we will need to be able to reason about the parameterisation of ReLU networks is a more precise characterisation of the fold-set, in particular, the dependence structure between individual piece-wise hyperplanes. For example, consider the piece-wise linear surface in Figure 1 and compare it to the one in Figure 2. Suppose as before that the blue hyperplanes come from first-layer units, the orange hyperplanes come from second-layer units, and the black hyperplanes come from third-layer units. The difference between Figure 1 and Figure 2 is that if we observe only the fold-set, i.e. only the union of the zero-level sets over all layers (as shown in the right pane of Figure 2), then in the case of Figure 2, it is impossible to know which folds come from which layers. For instance, the blue folds and the orange folds could be assigned to the first and second layer almost arbitrarily; there is not enough information (i.e. intersection) in the fold-set to tell which is which. In contrast, the piece-wise linear surface in the right pane of Figure 1 could in principle be disambiguated into first-, second- and third-layer folds by the following procedure: 1. Take the largest possible union of hyperplanes that is a subset of the fold-set, and assign the hyperplanes to layer one. 2. Take all piece-wise hyperplanes with respect to the partition induced by the first-layer folds, and assign them to layer two. 3. Take all piece-wise hyperplanes with respect to the partition induced by the first- and second-layer folds, and assign them to layer three. This procedure is not guaranteed to assign all folds to their original layers because it ignores how piece-wise hyperplanes are connected; for example, for the piece-wise linear surface in Figure 1, the procedure yields the layer assignment shown in Figure 3. However, it is sufficient for our purposes, and it is easier to work with mathematically. Formally, for a piece-wise linear surface S, we denote ⌊S⌋_k := ∪{S' ⊆ S | S' is a piece-wise linear surface of order at most k}. One can show that ⌊S⌋_k is itself a piece-wise linear surface of order at most k, so one can think of ⌊S⌋_k as the 'largest possible' subset of S that is a piece-wise linear surface of order at most k. For the piece-wise linear surface in Figure 3, the set ⌊S⌋_1 consists of the blue hyperplanes, ⌊S⌋_2 consists of the blue and the orange (piece-wise) hyperplanes, and ⌊S⌋_3 = S. This definition allows us to uniquely decompose S into its piece-wise hyperplanes. Let S = ∪_{l∈[κ], i∈[n_l]} H^l_i be any representation of S in terms of its piece-wise hyperplanes. We say the representation is canonical if, for each l, the H^l_i are piece-wise hyperplanes with respect to the partition induced by ⌊S⌋_{l−1}. One can show that such a representation exists and is unique up to subscript indexing. Importantly, it assigns a unique 'layer' to each piece-wise hyperplane, its superscript. In other words, for architectures with non-increasing widths, there exists a ReLU network h such that knowledge of the input-output mapping h determines the network's parameters uniquely up to permutation and scaling.
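Before the proof sketch, note that the partitions underlying these definitions can be probed numerically: each linear region of a network corresponds to one activation pattern, so sampling points and hashing their patterns enumerates the regions cut out by the folds. A minimal sketch of this idea (ours, with arbitrary weights):

```python
import numpy as np

rng = np.random.default_rng(2)
dims = [2, 3, 3, 1]
Ws = [rng.normal(size=(dims[i + 1], dims[i])) for i in range(3)]
bs = [rng.normal(size=dims[i + 1]) for i in range(3)]

def pattern(z):
    """Activation pattern (one bit per hidden unit); constant on each linear region."""
    bits, a = [], z
    for W, b in zip(Ws[:-1], bs[:-1]):
        pre = W @ a + b
        bits.extend((pre > 0).astype(int))
        a = np.maximum(pre, 0.0)
    return tuple(bits)

pts = rng.uniform(-2, 2, size=(20000, 2))
regions = {pattern(p) for p in pts}
print("distinct linear regions hit in [-2, 2]^2:", len(regions))
```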
The idea behind the proof is as follows. Suppose we are given the function h. Then we also know its fold-set F(h), and if h is general and transparent, the fold-set is a piece-wise linear surface (by Lemma 2) of the form F(h) = ∪_{l∈[L−1], i∈[d_l]} {z | h^{1:l}_i(z) = 0}. As we have mentioned earlier, this union of zero-level sets contains a lot of information about the network's parameters, provided we can disambiguate the union to obtain the zero-level sets of individual units. This disambiguation of the union is crucial, but is impossible in general. To see why, consider the first-layer units: given F(h), we want to identify ∪_i {z | h^{1:1}_i(z) = 0}. If F(h) is a union of d_1 hyperplanes, we are done. In general however, F(h) may contain more than d_1 hyperplanes, such as for example in Figure 2. In such a setting it is impossible to tell which hyperplanes come from the first layer. The key insight here is the following: even though, say, a last-layer unit can create a fold that looks like a hyperplane, this hyperplane cannot have any dependencies, or descendants in the dependency graph. This follows from the fact that the layer is the last. More generally, if a (piece-wise) hyperplane has a chain of descendants of length m, it must come from a layer that is at least m layers below the last one. Formally, we have the following lemma. Lemma 3. Let h: Z → R be a general ReLU network, denote S := F(h), and let S = ∪_{λ,ι} H^λ_ι be its canonical representation. If H^λ_ι has a chain of descendants of length m in the dependency graph of S, then λ ≤ L − 1 − m. Main proof idea. This lemma motivates the main idea of the proof. We explicitly construct a network h such that the dependency graph of its fold-set is well connected. More precisely, we ensure that each of the hyperplanes corresponding to first-layer units has a chain of descendants of length L − 2. This implies by Lemma 3 that the first-layer hyperplanes can be identified as such, using only the information contained in the fold-set. One can show that this is sufficient to recover the parameters W_1, b_1, up to permutation and scaling. To extend the argument to higher layers, we then consider the truncated network h^{l:L}. In h^{l:L}, layer l becomes the first layer, and we apply the same reasoning as above to recover W_l, b_l. The next lemma shows that a network with a 'well connected' dependency graph exists. In what follows, f|_A denotes the restriction of a function f to a domain A, and Z^l denotes the image of Z under the first l layers. Lemma 4. For any eligible architecture there exists a general transparent ReLU network h such that, for every l, the set F(h^{l:L}|_{int Z^{l−1}}) is a piece-wise linear surface whose dependency graph contains d_l directed paths of length L − 1 − l with distinct starting vertices. Proof sketch of the theorem. Comparing the identified first-layer folds of h_θ and h_η shows that they coincide hyperplane by hyperplane; one can show that this implies the existence of scalars m_1, …, m_{d_1} relating the corresponding first-layer rows. We have thus shown that there exists a permutation matrix P_l ∈ R^{d_l×d_l} and a nonzero-entry diagonal matrix M_l realising this correspondence; one can also show that the scalars m_i are positive. For the inductive step, let l ∈ {2, …, L − 1}, and assume that there exist permutation matrices P_1, …, P_{l−1} and positive-entry diagonal matrices M_1, …, M_{l−1} such that the claimed relation holds up to layer l − 1. Since the end-to-end mappings are the same, we can apply the same argument to the truncated network h^{l:L} as we presented above for the case l = 1. We obtain that there exists a permutation matrix P_l ∈ R^{d_l×d_l} and a positive-entry diagonal matrix M_l for layer l. Finally, the last layer is handled analogously. Discussion of assumptions. Most of the theorem's assumptions have their origin in Lemma 4. The reason we restrict the domain of h^{l:L} to the interior of Z^{l−1} is that we want h^{l:L} to be defined on an open set (otherwise fold-sets become unwieldy). For similar reasons, we study only architectures with non-increasing widths; otherwise int Z^{l−1} may be empty. We conjecture that the theorem does not hold for more general architectures. If it does, the proof will likely go beyond fold-sets.
To guarantee transparency, our construction is such that for each input z ∈ Z and layer l ∈ [L − 1], either h^{1:l}_1(z) > 0 or h^{1:l}_2(z) > 0. Transparency could in principle be achieved with just a single unit, but it would have to be positive everywhere. This is why we impose d_l ≥ 2. Guaranteeing transparency for the first layer (whose inputs are not constrained to the positive quadrant) also necessitates boundedness of Z. Boundedness can be lifted if we consider a slightly modified definition of transparency; proofs become more complicated though, and we do not consider this crucial. Almost all of the proof carries over to the case of leaky ReLU activations (where σ is defined as σ(u)_i = max{αu_i, u_i} for some small α > 0). The part that does not carry over is our proof that M_l has only positive entries on the diagonal: in this part, we compare the slope of h^{l:L}_θ for inputs on the positive and negative side of a given ReLU unit, and notice that the negative-side slope is 'singular' in the sense that some basis directions have zero magnitude. This particular argument does not work for the leaky ReLU, though we cannot rule out that a simple workaround exists. In this work, we have shown that for architectures with non-increasing widths, certain ReLU networks are almost uniquely identified by the function they implement. The result suggests that the function-equivalence classes of ReLU networks are surprisingly small, i.e. there may be only little redundancy in the way ReLU networks are parameterised, contrary to what is commonly believed. This apparent contradiction could be explained in a number of ways: • It could be the case that even though exact equivalence classes are small, approximate equivalence is much easier to achieve. That is, it could be that ‖h_θ − h_η‖ ≤ ε is satisfied by a disproportionately larger class of parameters η than ‖h_θ − h_η‖ = 0. This issue is related to the so-called inverse stability of the realisation map of neural nets, which is not yet well understood. • Another possibility is that the kind of networks we consider in this paper is not representative of networks typically encountered in practice, i.e. it could be that 'typical networks' do not have well connected dependency graphs, and are therefore not easily identifiable. • Finally, we have considered only architectures with non-increasing widths, whereas some previous theoretical work has assumed much wider intermediate layers compared to the input dimension. It is possible that parameterisation redundancy is much larger in such a regime compared to ours. However, gains from over-parameterisation have also been observed in practical settings with architectures not unlike those considered here. We consider these questions important directions for further research. We also hypothesise that our analysis could be extended to convolutional and recurrent networks, and to other piece-wise linear activation functions such as leaky ReLU. Definition A.1 (Partition). Let S ⊆ Z. We define the partition of Z induced by S, denoted P_Z(S), as the set of connected components of Z \ S. Definition A.2 (Piece-wise hyperplane). Let P be a partition of Z. We say H ⊆ Z is a piece-wise hyperplane with respect to the partition P if H ≠ ∅ and there exist (w, b) ≠ (0, 0) and P ∈ P such that H = {z ∈ P | w^T z + b = 0}. Definition A.3 (Piece-wise linear surface / pwl. surface). A set S ⊆ Z is called a piece-wise linear surface on Z of order κ if it can be written as S = ∪_{l∈[κ], i∈[n_l]} H^l_i, where each H^l_i is a piece-wise hyperplane with respect to P_Z(∪_{l'<l, i'} H^{l'}_{i'}), and no number smaller than κ admits such a representation. Lemma A.1.
If S_1, S_2 are piece-wise linear surfaces on Z of order k_1 and k_2, then S_1 ∪ S_2 is a piece-wise linear surface on Z of order at most max{k_1, k_2}. Given sets Z and S ⊆ Z, we introduce the notation ⌊S⌋_i := ∪{S' ⊆ S | S' is a pwl. surface on Z of order at most i}. (The dependence on Z is suppressed.) By Lemma A.1, ⌊S⌋_i is itself a pwl. surface on Z of order at most i. Lemma A.2. For i ≤ j and any set S, we have ⌊⌊S⌋_j⌋_i = ⌊⌊S⌋_i⌋_j = ⌊S⌋_i. Proof. We will need these definitions: ⌊S⌋_j = ∪{S' ⊆ S | S' is a pwl. surface of order at most j}, ⌊⌊S⌋_j⌋_i = ∪{S' ⊆ ⌊S⌋_j | S' is a pwl. surface of order at most i}, ⌊⌊S⌋_i⌋_j = ∪{S' ⊆ ⌊S⌋_i | S' is a pwl. surface of order at most j}. Consider first the equality ⌊⌊S⌋_i⌋_j = ⌊S⌋_i. We know that ⌊⌊S⌋_i⌋_j ⊆ ⌊S⌋_i because the ⌊·⌋ operator always yields a subset. At the same time, ⌊S⌋_i ⊆ ⌊⌊S⌋_i⌋_j, because ⌊S⌋_i satisfies the condition for membership in the defining union. To prove the equality ⌊⌊S⌋_j⌋_i = ⌊S⌋_i, we use the inclusion ⌊S⌋_j ⊆ S to deduce ⌊⌊S⌋_j⌋_i ⊆ ⌊S⌋_i. Now let S' ⊆ S be one of the sets under the union defining ⌊S⌋_i, i.e. a pwl. surface of order at most i. Then it is also a pwl. surface of order at most j, implying S' ⊆ ⌊S⌋_j. This means S' is also one of the sets under the union defining ⌊⌊S⌋_j⌋_i, proving that ⌊S⌋_i ⊆ ⌊⌊S⌋_j⌋_i. Lemma A.3. Let Z and S ⊆ Z be sets. Then one can write ⌊S⌋_{k+1} = ⌊S⌋_k ∪ ∪_i H_i, where the H_i are piece-wise hyperplanes wrt. P_Z(⌊S⌋_k). Proof. At the same time, the right-hand side is a pwl. surface of order at most k + 1, because ⌊S⌋_k is a pwl. surface of order at most k and each H^{k+1}_i can be decomposed into piece-wise hyperplanes wrt. P_Z(⌊S⌋_k). Definition A.4 (Canonical representation of a pwl. surface). Let S be a pwl. surface on Z. The pwl. representation S = ∪_{l∈[κ], i∈[n_l]} H^l_i is called canonical if, for each l, the H^l_i are piece-wise hyperplanes wrt. P_Z(⌊S⌋_{l−1}). Lemma A.4. If S = ∪_{l∈[κ], i∈[n_l]} H^l_i is a pwl. surface in canonical form, then κ is the order of S. Proof. Denote the order of S by λ. By the definition of order, λ ≤ κ, and S = ⌊S⌋_λ. Then, since the representation is canonical, it follows that κ = λ. Lemma A.5. Every pwl. surface has a canonical representation. Proof. The inclusion ∪_{l∈[k], i∈[n_l]} H^l_i ⊆ ⌊S⌋_k holds for any representation. We will show the other inclusion by induction in the order of S. If S is of order one, ⌊S⌋_1 ⊆ S = ∪_{i∈[n_1]} H^1_i holds for any representation and we are done. Now assume the lemma holds up to order κ − 1, and let S be of order κ. Then by Lemma A.3, S = ⌊S⌋_κ = ⌊S⌋_{κ−1} ∪ ∪_i H^κ_i, where the H^κ_i are piece-wise hyperplanes wrt. P_Z(⌊S⌋_{κ−1}). By the inductive assumption, ⌊S⌋_{κ−1} has a canonical representation, which completes the construction. Lemma A.6. The canonical representation is unique up to subscript indexing. Proof. Let k ∈ [κ]. Because both representations are canonical, the H^k_i and G^k_j are piece-wise hyperplanes wrt. the same partition, where on both sides we have a union of hyperplanes on an open set. The claim follows. Definition A.5 (Dependency graph of a pwl. surface). Let S be a piece-wise linear surface on Z, and let S = ∪_{l∈[κ], i∈[n_l]} H^l_i be its canonical representation. We define the dependency graph of S as the directed graph that has the piece-wise hyperplanes (H^l_i)_{l,i} as vertices, and has an edge from H^l_i to H^{l+1}_j whenever the latter depends on the former. We denote by σ the ReLU function: σ(u)_i = max{0, u_i} for i ∈ [dim(u)]. Definition A.6 (ReLU network). Let Z ⊆ R^{d_0} with d_0 ≥ 2 be a nonempty open set, and let the weights and layers be as in the main text. (We will omit the subscript θ when it is clear from the context.) We write f|_A to denote the restriction of the function f to the domain A. Definition A.7 (Activation indicator). (I_1, …, I_{L−1}) is called an activation indicator if I_l = diag(i_l) ∈ R^{d_l×d_l} and i_l ∈ {0, 1}^{d_l} for l ∈ [L − 1]. It is called non-trivial if i_l ≠ 0 for all l ∈ [L − 1], and non-trivial up to k if i_l ≠ 0 for all l ∈ [k]. For a unit (l, i) and an activation indicator I, we introduce the notation w^l_i(θ, I), b^l_i(θ, I). (We will omit the argument θ when it is clear from the context.) These quantities characterise the different linear pieces implemented by the network's units.
Also define I_θ(z) = (I_1(z), …, I_{L−1}(z)), the activation indicator realised at the input z. Lemma A.7. Proof. Left as exercise. Definition A.8 (Fold-set). Let Z be an open set, and f: Z → R a continuous, piece-wise linear function. We define the fold-set of f, denoted by F(f), as the set of all points at which f is non-differentiable. Definition A.9 (Positive / negative in a neighbourhood). Let Z be an open set. The function f: Z → R is positive (negative) in the neighbourhood of z ∈ Z if for any ε > 0 there exists z' ∈ B_ε(z) such that f(z') > 0 (f(z') < 0). Definition A.10 (Unit fold-set). Let h_θ: Z → R be a ReLU network. We define the unit (l, i) fold-set of h_θ, denoted F^l_i(h_θ). Lemma A.8. Let f be continuous and piece-wise linear; then z ∈ F(σ ∘ f) if and only if either f(z) > 0 and z ∈ F(f), or f(z) = 0 and f is positive in the neighbourhood of z. Proof. We will prove that if z satisfies any of the two conditions, then z ∈ F(σ ∘ f), and if it violates both, then z ∈ F(σ ∘ f)^c. We begin with the latter implication. Let z be such that f(z) > 0 and z ∉ F(f), i.e. f is differentiable at z. Since f is piece-wise linear, there exists ε > 0 such that all of B_ε(z) lies inside a single linear region of f and f(B_ε(z)) ⊆ (0, ∞). Then, on B_ε(z), the ReLU behaves like an identity, implying σ ∘ f is differentiable at z, proving that z ∈ F(σ ∘ f)^c. Next, consider z such that f(z) = 0. For it to violate the second condition, there must exist a ball B_ε(z) around z such that f(B_ε(z)) ⊆ (−∞, 0]. (This is also true if f(z) < 0.) Then, on B_ε(z), the ReLU behaves like a constant zero, implying that σ ∘ f is differentiable at z. We now prove the other implication. If f(z) > 0 and z ∈ F(f), then there exists ε > 0 such that f(B_ε(z)) ⊆ (0, ∞), which guarantees that the ReLU behaves like an identity on B_ε(z). In this ball we have σ ∘ f = f, so z ∈ F(σ ∘ f). If f(z) = 0 and f is positive in the neighbourhood of z, we distinguish several cases. If z ∉ F(f), then there exists a ball B_δ(z) on which f behaves linearly, i.e. σ(f(z')) = σ(w^T z' + b), implying z ∈ F(σ ∘ f). If z ∈ F(f) and, in addition, there exists a ball B_δ(z) such that f(B_δ(z)) ⊆ [0, ∞), then the ReLU behaves like an identity on B_δ(z) and z ∈ F(σ ∘ f). The final case is z ∈ F(f) such that f attains both positive and negative values in its neighbourhood. Since f is piece-wise linear, there exist p, n such that f(z + n) < 0 < f(z + p), and the claim follows. Lemma A.9. Let Z be an open set, and let f_1, …, f_n: Z → R be continuous, piece-wise linear functions. For any w_1, …, w_n ∈ R, F(Σ_i w_i f_i) ⊆ ∪_i F(f_i). Proof. Left as exercise. Lemma A.10. For all θ except a closed zero-measure set, rank(I_l W_l ⋯ I_k W_k) = min{d_{k−1}, rank(I_k), …, rank(I_l)} for all activation indicators I and all k ≤ l. Proof. First, notice that the unconstrained-rank claim is just a special case of the above with each I_l equal to the identity matrix. It therefore suffices to prove the general claim. To further simplify, we will prove the statement for a single fixed activation indicator I. Then if Θ(I) is the set of networks for which the claim holds given I, and Θ(I) contains all networks except a closed zero-measure set, then also ∩_I Θ(I) contains all networks except a closed zero-measure set, proving the lemma. Let us hence fix I, and let k ∈ [L]. We proceed by induction. For the initial step, notice that the matrix I_k W_k is just W_k with some rows replaced by zeroes. The rank of such a matrix is the same as that of the matrix obtained by removing the zero rows, which has size (rank(I_k), d_{k−1}). For all W_k except a closed zero-measure set, this matrix has rank min{d_{k−1}, rank(I_k)}. For the inductive step, denote W̃_i := I_i W_i ⋯ I_k W_k and r_i := min{d_{k−1}, rank(I_k), …, rank(I_i)}. We assume that rank(W̃_{i−1}) = r_{i−1} and want to prove the same for i. Notice that for all W_i except a closed zero-measure set, any r_i rows of W_i are linearly independent and their span intersects with ker(W̃_{i−1}) only at 0.
To see this, recall that by the inductive assumption, rank(W̃_{i−1}) = r_{i−1}, so ker(W̃_{i−1}) has dimension d_{i−1} − r_{i−1}. We can concatenate any r_i-subset of rows of W_i to the basis of ker(W̃_{i−1}) to obtain a matrix of size (r_i + d_{i−1} − r_{i−1}, d_{i−1}), which is a wide matrix, because r_i ≤ r_{i−1}. Hence, its rows are linearly independent for all W_i except a closed zero-measure set. We now prove that rank(I_i W_i W̃_{i−1}) = min{rank(W̃_{i−1}), rank(I_i)} = r_i. The "≤" direction is immediate. For the "≥" direction, we distinguish between two cases. If rank(I_i) ≤ rank(W̃_{i−1}), let v_1, …, v_{r_i} be the (linearly independent) nonzero rows of I_i W_i. We want to show that the v_j W̃_{i−1} are linearly independent, i.e. that I_i W_i W̃_{i−1} has at least r_i linearly independent rows. If Σ_{j=1}^{r_i} λ_j v_j W̃_{i−1} = 0, then Σ_{j=1}^{r_i} λ_j v_j ∈ ker(W̃_{i−1}), which by assumption implies Σ_j λ_j v_j = 0. By the independence of {v_j}, we obtain λ_j = 0, i.e. the v_j W̃_{i−1} are linearly independent, and rank(I_i W_i W̃_{i−1}) = r_i. If rank(I_i) > rank(W̃_{i−1}), we can reduce the problem to the case rank(I_i) ≤ rank(W̃_{i−1}) by observing that rank(I_i W_i W̃_{i−1}) ≥ rank(J_i W_i W̃_{i−1}) if J_i equals I_i only with some 1's replaced by 0's. We can thus take any such J_i and apply the argument from the previous paragraph to obtain rank(I_i W_i W̃_{i−1}) ≥ r_i. Lemma A.11. For all θ except a closed zero-measure set, the following holds. Let (l, i), (k, j) be any units, let I be an activation indicator non-trivial up to l − 1, and let J be an activation indicator non-trivial up to k − 1, such that (l, i, I^{1:l−1}) ≠ (k, j, J^{1:k−1}). Then, for all scalars c ∈ R, it holds that [w^l_i(θ, I), b^l_i(θ, I)] ≠ c [w^k_j(θ, J), b^k_j(θ, J)]. Proof. First, we exclude from consideration all θ with w^l_i(θ, I) = 0 for some l, k, i, j, and some I non-trivial up to l − 1. Since for any fixed (l, k, i, j, I), the set of θ satisfying the above is the set of roots of a non-trivial polynomial in θ, it is zero-measure and closed. Because there are only finitely many configurations of (l, k, i, j, I), we have thus excluded a closed zero-measure set of parameters. We will denote its complement Θ*. From now on, we assume θ ∈ Θ*. Notice that the case c = 0 of the lemma is thus automatically satisfied, since w^l_i(θ, I) ≠ 0. In the following, we can therefore assume c ≠ 0 and treat (l, i, I) and (k, j, J) symmetrically. Denote by Θ^¬ ⊆ Θ* the set of parameters θ for which the lemma does not hold; we need to show that Θ^¬ is closed and zero-measure. We start by showing the latter property by contradiction. Suppose Θ^¬ is positive-measure. We know that for all θ ∈ Θ^¬, there exist triples (l, i, I), (k, j, J) as stated in the lemma, and a scalar c ∈ R such that [w^l_i(θ, I), b^l_i(θ, I)] = c [w^k_j(θ, J), b^k_j(θ, J)]. Let C denote the set of all triplet-pairs ((l, i, I), (k, j, J)) satisfying the conditions of the lemma; then the previous statement can be written as a union over C. Since C is finite, there exist ((l, i, I), (k, j, J)) ∈ C for which the set under the union (call it Θ') is positive-measure. We now consider two cases. If (l, i) = (k, j), then observe that Θ' must contain some θ, θ' for which the proportionality holds with scalars c and c' respectively. Putting everything together, we have that (c − c')v = [0, δ], and in particular w^k_j(θ, J) = 0. This contradicts the assumption that θ ∈ Θ* and completes the proof for the case (l, i) = (k, j). Definition A.11 (General ReLU network). A ReLU network is general if it satisfies Lemmas A.10, A.11 and A.12. All ReLU networks except a closed zero-measure set are general. Lemma A.13. If h is a general ReLU network, then F(h) equals the union of the unit fold-sets; the inclusion F(h) ⊆ ∪_{l,i} F^l_i(h) follows from Lemma A.9.
For the other inclusion, let z ∈ F(ȟ) such that I(z_1) =: I and I(z_2) =: J are independent of ε, and consider ∇ȟ. We consider three cases based on the (non-)triviality of I and J. First, suppose both I and J are trivial up to l − 1. Then by Lemma A.7, ∇ȟ^{1:l−1}_i(z_1) = 0, and similarly ∇ȟ^{1:l−1}_k(z_2) = 0, which contradicts the assumption on ∇ȟ. Hence, at least one of I, J must be non-trivial up to l − 1. Second, say both I and J are non-trivial up to l − 1. From the equality of the gradients it follows that I^{1:l−1} = J^{1:l−1}; we can therefore apply Lemma A.11 to (l, i, I) and (l, i, J), and we obtain the required contradiction. Lemma A.14. Let h: Z → R be a ReLU network, and let λ ∈ [L]; if h is general (transparent), then so is h^{λ:L}|_{int Z^{λ−1}}. Proof. We will abbreviate h^{λ:L}|_{int Z^{λ−1}} as h^{λ:L}. Assume h is general. Then h^{λ:L} clearly satisfies Lemma A.10, and for all (l, i), W_l[i, :] ≠ 0. Next, we prove that h^{λ:L} satisfies Lemma A.11. Suppose this was not the case; then there exist units (λ − 1 + l, i), (λ − 1 + k, j), and non-trivial activation indicators I = (I_λ, …, I_{λ−1+l}), J = (J_λ, …, J_{λ−1+k}), with (l, i, I) ≠ (k, j, J), and a scalar C ∈ R for which the proportionality holds. Then for any non-trivial indicator (I_1, …, I_{λ−1}) = (J_1, …, J_{λ−1}), we obtain by post-multiplying, for all ι ∈ [λ − 1], the corresponding proportionality for h. The first equality means that the corresponding slopes of h agree, contradicting the generality of h. The last condition of generality is Lemma A.12. Suppose h^{λ:L} does not satisfy the lemma. Then there exists a unit (l, i) such that h^{λ:l}_i is positive and negative in the neighbourhood of some point, i.e. there exists z ∈ int Z^{l−1} such that h^{λ:l}_i(z) = 0, and for some ε > 0 both signs occur on B_ε(z). However, then there exists z' ∈ Z such that ȟ^{1:l−1}(z') = z, and for z' we obtain h^{1:l}_i(z') = 0; by continuity, there is δ > 0 such that the same sign pattern occurs on B_δ(z'). This contradicts the fact that h satisfies Lemma A.12. We have thus shown that if h is general, then h^{λ:L}|_{int Z^{λ−1}} is general. Finally, assume h is transparent, i.e. for all z ∈ Z and l ∈ [L − 1] at least one unit on layer l is active; transparency of the truncation follows. Lemma A.15. a) For all ReLU networks h, the fold-set of h^{1:l+1} is contained in the corresponding union of zero-level sets. b) In particular, for all general transparent ReLU networks, equality holds. Proof. We give a proof of b) only. A proof of a) can be obtained by replacing some equalities by inclusions. We will prove the claim by induction: assume the identity for F(h^{1:l}) holds; we will prove the same statement for l + 1. By Lemma A.8 and Lemma A.13, we have the corresponding identity for F(h^{1:l+1}). Since the induction hypothesis applies, it remains to show the reverse inclusion; we do so by contradiction. The partition in question is the partition of the input space into the linear regions of h^{1:l+1}_i, and by Lemma A.15, the function h^{1:l+1}_i is also linear on the regions of this partition; denote the slope and bias of h^{1:l+1}_i on P by w(P), b(P). Then the positivity condition guarantees that (w(P), b(P)) ≠ 0, and each restricted zero-level set is either an empty set or a piece-wise hyperplane. • The set F(h^{l:L}|_{int Z^{l−1}}) is a pwl. surface whose dependency graph contains d_l directed paths of length (L − 1 − l) with distinct starting vertices. We will show that this construction satisfies the lemma. The networks are transparent because of how we define W^l_2: for all x ∈ X and l ∈ [L − 1], either the first or the second unit of layer l is positive. Let the fold-set S = ∪_{λ,ι} H^λ_ι be its canonical representation, and let G denote its dependency graph. To find the required paths in G, we first identify some important vertices. For λ ∈ [L − l], denote the relevant subsets; each such set is nonempty and open because P^{l+λ} is nonempty and open. Next, for any unit (λ, ι), by the definition of W^{l+λ−1}_1 and the fact that h^{l:l+λ−1}(Z^{l−1}) covers the required region, we now show that G contains the required edges. Then, because of how the {P^l}_l are defined, there exists z̃ ∈ P^l such that h^{l:l+λ−1}(z̃) = z̄, so it satisfies the defining equation. It follows that z̃ ∈ F^λ_ι. At the same time, the preimage (h^{l:l+λ−1})^{−1}(B_ε(z̄)) is open by continuity, and contains z̃. So there exists a ball B_ε(z̃) ⊆ P^l such that all z ∈ B_ε(z̃) satisfy the required condition, and the zero-level set intersects the center of the half-ball, z̄.
Therefore there exists a sequence of points {z_n} ⊆ P^l such that z_n → z̃, and we obtain that z̃ ∈ cl(F^λ_ι). Proof. Because the representation is canonical, we have the stated inclusions, which imply that H can be written as a union where P runs over the linear regions of h^{1:l_{ι−1}}, with all pieces included in the same hyperplane. However, by Lemma A.11, no two piece-wise hyperplanes in S are included in a single hyperplane, so we get a contradiction. Hence, we obtain l_0 < l_1 < ⋯ < l_m ≤ λ, which yields l_0 ≤ λ − m. Let h_θ: X → R be a general ReLU network satisfying Lemma A.17, and let h_η: X → R be any general ReLU network such that h_θ(x) = h_η(x) for all x ∈ X. Denote η = (W'_1, b'_1, …, W'_L, b'_L). Then there exist permutation matrices P_1, …, P_{L−1} and positive-entry diagonal matrices M_1, …, M_{L−1} such that the parameters are related by the corresponding permutations and rescalings. Finally, consider the last layer.
We prove that there exist ReLU networks whose parameters are almost uniquely determined by the function they implement.
992
scitldr
A general problem that has received considerable recent attention is how to perform multiple tasks in the same network, maximizing both efficiency and prediction accuracy. A popular approach consists of a multi-branch architecture on top of a shared backbone, jointly trained on a weighted sum of losses. However, in many cases, the shared representation results in non-optimal performance, mainly due to interference between conflicting gradients of uncorrelated tasks. Recent approaches address this problem by a channel-wise modulation of the feature-maps along the shared backbone, with task-specific vectors, manually or dynamically tuned. Taking this approach a step further, we propose a novel architecture which modulates the recognition network channel-wise as well as spatial-wise, with an efficient top-down, image-dependent computation scheme. Our architecture uses neither task-specific branches nor task-specific modules. Instead, it uses a top-down modulation network that is shared between all of the tasks. We show the effectiveness of our scheme by achieving results on par with or better than alternative approaches on both correlated and uncorrelated sets of tasks. We also demonstrate our advantages in terms of model size, the addition of novel tasks and interpretability. Code will be released. The goal of multi-task learning is to improve the learning efficiency and increase the prediction accuracy of multiple tasks learned and performed together in a shared network. Over the years, several types of architectures have been proposed to combine the training and evaluation of multiple tasks. Most current schemes assume task-specific branches on top of a shared backbone (Figure 1a) and use a weighted sum of task losses, fixed or dynamically tuned, to train them. Having a shared representation is more efficient from the standpoint of memory and sample complexity and can also be beneficial in cases where the tasks are correlated to each other. However, in many other cases, the shared representation can also result in worse performance due to the limited capacity of the shared backbone and interference between conflicting gradients of uncorrelated tasks. The performance of the multi-branch architecture is highly dependent on the relative loss weights and the task correlations, and cannot be easily determined without a "trial and error" search phase. Another type of architecture that has recently been proposed uses task-specific modules, integrated along a feed-forward backbone and producing task-specific vectors to modulate the feature-maps along it (Figure 1b). Here, both training and evaluation use a single-tasking paradigm: executing one task at a time, rather than getting all the task responses in a single forward pass of the network. A possible disadvantage of using task-specific modules, and of using a fixed number of branches, is that it may become difficult to add additional tasks at a later time during the system's lifetime. Modulation-based architectures have also been proposed in other recent work (Figure 1c). However, all of these works modulate the recognition network channel-wise, using the same modulation vector for all the spatial dimensions of the feature-maps. We propose a new type of architecture with no branching, which performs a single task at a time but with no task-specific modules (Figure 1d).
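For concreteness, here is a minimal sketch of the multi-branch baseline of Figure 1a, with our own toy sizes and names rather than the paper's implementation: a shared backbone, one linear head per task, and a fixed weighted sum of cross-entropy losses.

```python
import torch
import torch.nn as nn

class MultiBranch(nn.Module):
    """Shared backbone feeding one head per task (the Figure 1a baseline)."""
    def __init__(self, num_tasks, feat_dim=64, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 5), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU())
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, num_classes) for _ in range(num_tasks)])

    def forward(self, x):
        f = self.backbone(x)                      # shared representation
        return [head(f) for head in self.heads]   # one output per task

model = MultiBranch(num_tasks=2)
x = torch.randn(8, 1, 28, 28)
targets = [torch.randint(0, 10, (8,)) for _ in range(2)]
weights = [0.5, 0.5]                              # fixed loss weights ("uniform scaling")
loss = sum(w * nn.functional.cross_entropy(o, t)
           for w, o, t in zip(weights, model(x), targets))
loss.backward()                                   # gradients from all tasks hit the backbone
```

The single backward pass makes the gradient-interference problem discussed above visible: every task's gradient flows through the same backbone parameters.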
The core component of our approach is a top-down (TD) modulation network, which carries the task information in combination with the image information, obtained from a first bottom-up (BU1) network, and modulates a second bottom-up (BU2) network common for all the tasks. Figure 1: (a) Multi-branched architecture: task-specific branches on top of a shared backbone; induces capacity and destructive-interference problems and forces careful tuning. Recently proposed architectures: (b) using task-specific modules and (c) using channel-wise modulation modules. (d) Our architecture: a top-down, image-aware, full-tensor modulation network with no task-specific modules. In our approach, the modulation is channel-wise as well as spatial-wise (a full tensor modulation), calculated sequentially along the TD stream. This allows us, for example, to modulate only specific spatial locations of the image depending on the current task, and to obtain interpretability properties by visualizing the activations in the lowest feature-map of the TD stream. In contrast to previous works, our modulation mechanism is also "image-aware" in the sense that information from the image, extracted by the BU1 stream, is accumulated by the TD stream and affects the modulation process. The main differences between our approach and previous approaches are the following: First, as mentioned, our approach does not use multiple branches or task-specific modules. We can scale the number of tasks with no additional layers. Second, our modulation scheme includes a spatial component, which allows attention to specific locations in the image, as illustrated in figure 2a for the Multi-MNIST tasks. Third, the modulation in our scheme is also image-dependent and can modulate regions of the image based on their content rather than location (relevant examples are demonstrated in figures 2b and 2c). We empirically evaluated the proposed approach on three different datasets. First, we demonstrated accuracies on par with the single-task baseline on an uncorrelated set of tasks with MultiMNIST while using fewer parameters. Second, we examined the case of correlated tasks and outperformed all baselines on the CLEVR dataset. Third, we scaled the number of tasks and demonstrated our inherent attention mechanism on the CUB200 dataset. The choice of datasets includes cases where the tasks are uncorrelated (Multi-MNIST) and cases where the tasks are relatively correlated (CLEVR and CUB200). The results demonstrate that our proposed scheme can successfully handle both cases and shows distinct advantages over the channel-wise modulation approach. Our work draws ideas from the following research lines: Multiple Task Learning (MTL). Multi-task learning has been used in machine learning well before the revival of deep networks. The success of deep neural networks in single-task performance (e.g. in classification, detection and segmentation) has renewed the interest of the computer vision community in the field. Although our primary application area is computer vision, multi-task learning also has many applications in other fields, like natural language processing, and even across modalities. We further refer the interested reader to a review that summarizes recent work in the field.
First works used several duplications (as many as the tasks) of the base network, with connections between them to pass useful information between the tasks. These works do not share computations and cannot scale with the tasks. More recent architectures, which are in common practice these days, assume task-specific branches on top of a shared backbone, and use a weighted sum of losses to train them. The joint learning of several tasks has proven to be beneficial in several cases but can also decrease the results of some of the tasks due to a limited network capacity, uncorrelated gradients from the different tasks (sometimes called destructive interference) and different learning rates. A naive implementation of multi-task learning requires careful calibration of the relative losses of the different tasks. To address these problems several methods have been proposed: "Grad norm" dynamically tunes gradient magnitudes over time, to obtain similar learning rates for the different tasks. Another method uses a joint likelihood formulation to derive task weights based on the intrinsic uncertainty in each task. A third approach applies an adaptive weighting of the different tasks, to force a Pareto-optimal solution to the multi-task problem. Along an orthogonal line of research, other works suggested adding task-specific modules to be activated or deactivated during training and evaluation, depending on the task at hand. Liu et al. (2019b) suggest task-specific attention networks in parallel to a shared recognition network. Others suggest adding several types of low-weight task-specific modules (e.g. residual convolutional layers, squeeze-and-excitation (SE) blocks and batch normalization layers) along the recognition network. Note that the SE block essentially creates a modulation vector, to be channel-wise multiplied with a feature-map. Modulation vectors have further been used for recognition, continual learning and retrieval applications, and proved to decrease the destructive interference between tasks and the effect of the catastrophic-forgetting phenomenon. Our design, in contrast, does not use a multi-branch architecture, nor task-specific modules. Our network is fully shared between the different tasks. Compared to these works, we modulate the feature-maps in the recognition network channel-wise as well as spatial-wise, depending on both the task and the specific image at hand. Neuroscience research provides evidence for top-down context, feedback and lateral processing in the primate visual pathway (Piëch et al., 2013), where top-down signals modulate the neural activity of neurons in lower-order sensory or motor areas based on the current goals. This may involve enhancement of task-relevant representations or suppression of task-irrelevant ones. This mechanism underlies humans' ability to focus attention on task-relevant stimuli and ignore irrelevant distractions. In this work, consistent with this general scheme, we suggest a model that uses top-down modulation in the scope of multi-task learning. Top-down modulation networks with feedback, implemented as conv-nets, have been suggested by the computer vision community for several high-level tasks (e.g.
re-classification, keypoint detection, crowd counting, curriculum learning, etc.), and here we apply them to multi-task learning applications. Figure 3: Several types of modulation modules; the trainable parameters are illustrated in yellow. (a) Task-dependent vector-modulation architecture: the modulation vectors are switched by the task and explicitly optimized. (b) Hypothetical extension of (a) to spatial-wise modulation tensors; infeasible in practice due to the huge number of parameters to optimize. (c) Our approach: we optimize the parameters in the convolutional layers along the top-down network and use the created feature-maps as the modulation tensors. For simplicity we show only one modulation stage, where X is the input tensor to be modulated and Y is the output tensor. We will first describe the task-dependent vector-modulation mechanism, illustrated in figure 3a, and then describe our architecture (figure 3c) in detail. A vector-modulation (a channel-wise modulation) of a given tensor X by a vector z is defined as the product of the elements of the vector z and the corresponding channels of the tensor X. Each element in the output tensor Y is calculated by Y_{x,y,ch} = z_{ch} · X_{x,y,ch} (1), where X is the tensor to be modulated and Y is the modulated tensor, both in the form (H × W × C), where H, W are the spatial dimensions of the tensors and C their channel dimension. The vector z ∈ R^C has dimension equal to the number of channels of X, Y. Here x, y, ch are the column, row and channel number, and indicate a specific element in a tensor. In training the network, the elements of z are considered as parameters and are directly optimized (C additional parameters). In the scope of multi-task learning, where several tasks co-exist, the network switches on the fly between several modulation vectors z^1, …, z^K, where K is the number of tasks; see figure 3a for illustration. The network performs one task at a time, and the modulated tensor Y depends on the selected task. This vector-modulation module has been used separately for every stage in the recognition network (with additional CK parameters in every stage). Two limitations of this module are that it ignores the spatial dimensions of the image and that it lacks information from the image itself. The possible use of the same strategy to explicitly optimize spatial-aware modulation tensors (figure 3b) was discussed in prior work but was deemed infeasible due to the large amount of added parameters (HWCK additional parameters in every stage). Our method addresses both of these issues in an efficient manner and demonstrates better accuracy, showing that spatial-wise modulation and the use of image information are beneficial to many kinds of tasks. A tensor modulation is defined by Y_{x,y,ch} = Z_{x,y,ch} · X_{x,y,ch} (2), where Z ∈ R^{H×W×C} is a modulation tensor. To avoid the infeasible computation of directly optimizing Z, we propose the use of created feature-maps as the modulation tensors. Practically, we use a dedicated top-down (TD) convolutional stream, shared between the tasks, to create the modulation feature-maps, and optimize the weights of the convolutional layers instead of directly optimizing the modulation tensors (figure 3c). The number of added parameters in this case depends on the precise architecture of the TD stream but can be approximately estimated by 3 × 3 × C^2 parameters for each convolutional layer (several convolutional layers may be used in one stage). Avoiding the dependency of the number of added parameters on H, W and K allows us to apply the proposed architecture to large images and to scale the number of tasks, as illustrated in our experiments.
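A minimal sketch of the two modulation types in Eqs. 1 and 2 follows; the (N, C, H, W) tensor layout and the helper names are our own assumptions, not the paper's code.

```python
import torch

def channel_modulation(x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
    # Eq. 1: one scalar per channel, shared across all spatial positions.
    return x * z.view(1, -1, 1, 1)

def tensor_modulation(x: torch.Tensor, Z: torch.Tensor) -> torch.Tensor:
    # Eq. 2: element-wise product with a full modulation tensor, here produced
    # by a TD feature-map rather than optimized directly.
    return x * Z

x = torch.randn(4, 16, 32, 32)   # feature-map to be modulated
z = torch.randn(16)              # task-dependent channel-wise vector
Z = torch.randn(4, 16, 32, 32)   # TD feature-map acting as a full tensor
print(channel_modulation(x, z).shape, tensor_modulation(x, Z).shape)
```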
A gated modulation module with a residual connection. We further define a gated modulation module with a residual connection as Y_{x,y,ch} = X_{x,y,ch} · (1 + σ(Z_{x,y,ch})) (3), where the modulation tensor Z is gated with a sigmoid (or a tanh) function before the multiplication and then added to the input tensor X through a residual connection. The residual gated modulation with Z is thus equivalent to the modulation with Z̃ = 1 + σ(Z). Motivated by our ablation studies (Section 4.3.2), unless stated otherwise we use the gated modulation as defined in Eq. 3 in all of our experiments. For simplicity we denote this operation by the symbol ⊗ (a code sketch of this module appears later in this section). An illustration of our network design is shown in figure 1d. In our design, a bottom-up (BU2) recognition network is modulated by a top-down (TD) modulation stream. The inputs to the TD stream are the current task k and the feature-maps along the first bottom-up stream (BU1, where BU1 and BU2 share the same weights), added to the TD stream via lateral connections. The outputs of the TD stream are its feature-maps, which sequentially modulate the tensors along the recognition network (BU2). Figure 3c illustrates our architecture for one modulation step. Auxiliary losses. Our architecture can be naturally decomposed into three sub-networks (BU1, TD, BU2), allowing the structural advantage of adding auxiliary losses at the end of the BU1 or TD streams. This possibility is application-dependent. In the scope of multi-task learning, the TD auxiliary loss might be beneficial because it allows the use of spatial information in a task-dependent manner. This issue is further discussed in Section 4.3.4, where we demonstrate the use of a localization loss on the last TD feature-map. Applying the localization loss at training time allows us to obtain an attention map at inference time, which illustrates the relative weights assigned by the network to different locations in the image. We validate our approach on three different datasets. MultiMNIST. MultiMNIST is a multi-task learning version of the MNIST dataset in which multiple MNIST images are placed on the same image. We use 2-, 3- and 4-class experiments built as suggested in prior work. Several examples are demonstrated in Figure 2a. In the 2-class experiment the tasks are: classifying the digit on the top-left (task-LU) and classifying the digit on the bottom-right (task-RL). We correspondingly add task-LL and task-RU for classifying the digits on the bottom-left and top-right in the 3- and 4-class experiments. The digits are independently chosen and the tasks are considered to be uncorrelated. We use 60K examples and directly apply LeNet as the underlying backbone in our experiments. CLEVR. CLEVR is a synthetic dataset, consisting of 70K training images and 15K validation images, mainly used as a diagnostic dataset for VQA. The dataset includes images of 3D primitives with multiple attributes (shape, size, color and material) and a set of corresponding (question, answer) tuples. We followed the work of Liu et al. (2019a), which suggested using CLEVR not as a VQA dataset but rather as a referring-expression dataset, and further adapted it to a multi-task learning methodology. The tasks in our setup consist of 4 questions ("Are there exactly two cylinders in the image?", "Is there a cube right to a sphere?", "Is there a red sphere?" and "Is the leftmost sphere in the image large?"), arbitrarily chosen, with various compositional properties.
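Returning to Eq. 3, here is a minimal sketch of the gated modulation module with a residual connection, applied at one BU2 stage; the class and variable names are ours, as is the choice of a 1x1 convolution as the lateral connection (matching the description in Section 4).

```python
import torch
import torch.nn as nn

class GatedModulation(nn.Module):
    """Gated modulation with a residual connection (Eq. 3) at one BU2 stage."""
    def __init__(self, channels: int, gate: str = "sigmoid"):
        super().__init__()
        # 1x1 conv as the lateral connection carrying the TD feature-map.
        self.lateral = nn.Conv2d(channels, channels, kernel_size=1)
        self.gate = torch.sigmoid if gate == "sigmoid" else torch.tanh

    def forward(self, x: torch.Tensor, td: torch.Tensor) -> torch.Tensor:
        z = self.lateral(td)            # modulation tensor Z from the TD stream
        return x + x * self.gate(z)     # X + X*sigma(Z) == X * (1 + sigma(Z))

mod = GatedModulation(channels=16)
x = torch.randn(2, 16, 32, 32)          # BU2 feature-map
td = torch.randn(2, 16, 32, 32)         # matching TD feature-map
print(mod(x, td).shape)                 # torch.Size([2, 16, 32, 32])
```

The residual form keeps the identity mapping available (Z = large negative values give Y ≈ X), which is one plausible reason the ablations below favour it over plain multiplication.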
CUB200. CUB200 is a fine-grained recognition dataset that provides 11,788 bird images (equally divided for training and testing) over 200 bird species with 312 binary attribute annotations, most of them referring to the colors of specific birds' parts. In contrast to other work that used all of the 312 attributes as yes/no questions, we re-organized the attributes as a multi-task problem of 12 tasks (for 12 annotated bird parts), each with 16 classes (the annotated colors plus an unknown class), and train using a multi-class cross-entropy loss. To demonstrate our interpretability capability, we further used the parts' locations, annotated by a single point for each visible part, as an auxiliary target at the end of the TD stream. Architecture. We use LeNet, VGG-11 and ResNet-18 as our backbone BU architectures for the Multi-MNIST, CLEVR and CUB-200 experiments, respectively. Each of the backbones has been divided into two parts: a first part that consists mainly of the convolutional layers of the backbone, and a second part with the fully connected layers (the classifier). In our architecture, both BU streams consist of the first part of the backbone and share their weights. The TD stream, unless specified otherwise, is a replica of the BU stream in terms of layer structure and number of channels, combined with bilinear upsampling layers. The classifier is only attached to the BU2 stream. Information is passed between the BU1, TD and BU2 streams using lateral connections implemented as 1x1 convolutions. A task embedding layer (a fully connected layer) is added on top of the TD stream. See an illustration of the full scheme in figure 1d and a detailed architecture description of the Multi-MNIST experiments in the supplementary materials. Baselines. We compare our method both to a "single task" approach, where each task is independently solved, and to a "uniform scaling" approach, where a uniformly weighted sum of the individual losses is minimized. We have also compared our architecture to "ch-mod", a channel-wise vector-modulation architecture, and to MOO, a multi-objective optimization approach where the weights of the loss terms are dynamically tuned as suggested in prior work. We use the Multi-MNIST dataset to demonstrate our performance in the case of uncorrelated tasks for 2-, 3- and 4-task recognition problems with no additional hardware. All models were trained using a standard LeNet architecture. We used a batch size of 512 images trained on 1 GPU with a learning rate of 1e-3 using the Adam optimizer. Training curves are presented in figure []. Figure 4b visualizes the performance profile of the 2-class experiment as a scatter plot of accuracies on task-LU and task-RL for the single-task approach (vertical and horizontal lines, respectively) and the multi-branched approach for several manually tuned loss weights (the blue dots). The scatter plot demonstrates a capacity problem, where better accuracies (above a certain limit) in one task cannot be achieved without being reflected as lower accuracies on another task. Our results are marked as a red star, showing better accuracies than the single-task case with far fewer parameters. Table 1 summarizes our results on the Multi-MNIST experiment while sequentially enlarging the number of tasks. We show mean ± std based on 5 experiments for each row. Our method achieves better results than the single-task baseline while using far fewer parameters (the third column shows the number of parameters as a multiplier of the number of parameters in a standard LeNet architecture). A schematic sketch of the task-conditioned training step is given below.
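The following toy, self-contained reconstruction illustrates the single-task-at-a-time training procedure; all module names, sizes, and the exact placement of the task embedding are our own assumptions rather than the paper's code.

```python
import torch
import torch.nn as nn

class TinyBUTD(nn.Module):
    """Toy BU1 -> TD(task) -> BU2 scheme with shared BU weights."""
    def __init__(self, num_tasks=4, ch=16, num_classes=10):
        super().__init__()
        self.bu = nn.Conv2d(1, ch, 3, padding=1)       # shared by BU1 and BU2
        self.task_embed = nn.Linear(num_tasks, ch)     # task embedding on top of TD
        self.td = nn.Conv2d(ch, ch, 3, padding=1)      # one TD stage
        self.head = nn.Linear(ch, num_classes)

    def forward(self, x, task_id):
        f1 = torch.relu(self.bu(x))                    # BU1 pass
        t = torch.zeros(x.size(0), self.task_embed.in_features, device=x.device)
        t[:, task_id] = 1.0                            # one-hot task selection
        td_in = f1 + self.task_embed(t)[:, :, None, None]  # task enters the TD stream
        z = self.td(td_in)                             # TD feature-map
        f2 = torch.relu(self.bu(x))                    # BU2 pass (shared weights)
        f2 = f2 + f2 * torch.sigmoid(z)                # gated modulation (Eq. 3)
        return self.head(f2.mean(dim=(2, 3)))

model = TinyBUTD()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))
for task_id in range(4):                               # one task per forward pass
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x, task_id), y)
    loss.backward()
    opt.step()
```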
Other approaches, including the channel-wise modulation approach, achieve lower accuracy rates. Scaling the number of tasks keeps the accuracy gap almost without additional parameters. We further conducted ablation studies on Multi-MNIST to examine several aspects of our proposed architecture. Table 2 shows the ablation results, analyzed as follows. Using spatial-wise and image-aware modulation modules. Our experiments show that extending the existing channel-wise modulation architecture to an image-aware, spatial-wise modulation architecture improves the results. Table 2a quantifies the improvement in the results compared to the channel-wise modulation baseline (first row in the table). We show mean ± std based on 5 repetitions of the full training pipeline for each row. Using a channel-wise image-aware modulation architecture, by sequentially integrating information from the feature-maps in BU1 (second row), improves the accuracies by ∼0.4%. Using a spatial-wise modulation without using the information from the BU1 stream (third row) improves the accuracies by ∼2.7%. Our approach, which uses both image-aware and spatial-wise modulation, improves the accuracies by a solid gap of ∼3.3%. Number of channels in the TD stream. Table 2b compares the accuracies of our proposed architecture (first line, where the TD stream is a replica of the BU stream, which has 1, 10 and 20 channels in its feature-maps) with cheaper architectures which use a reduced number of channels along the layers in the TD stream. Our experiments show a trend line (the accuracies decrease when the number of channels in the TD stream decreases) and that optimizing the number of channels along the TD stream in terms of the efficiency-accuracy tradeoff can be done (demonstrated by the second row in the table, where higher accuracy is achieved while using fewer parameters). Connectivity type. Our architecture uses two sets of lateral connections: the first set passes information from the BU1 stream to the TD stream, and the second passes information from the TD stream to the BU2 stream. Table 2c compares the accuracy of our proposed architecture when using different connectivity types to the TD stream (first column) and to the BU2 stream (second column). Here + is an addition connectivity, × is a multiplication connectivity and ⊗ is a gated modulation with a residual connection as described in Equation 3. The table shows higher accuracy when using addition connectivity along the TD stream, and a small preference for the gated modulation connectivity over the multiplication connectivity along the BU2 stream. To better compare the two connectivity types we carried out 5 experiments and report mean ± std. We used the gated modulation connectivity type in all our experiments due to its slightly higher accuracy. Table 3: Performance on CLEVR (higher is better). Our approach yields better accuracies also on a correlated set of tasks, with no additional hardware as tasks are added. Better accuracies are demonstrated compared to both the single-task and uniform-scaling approaches, while using fewer parameters. Auxiliary losses. Our architecture, although it usually uses only one classification loss at the end of the BU2 stream, can be easily adapted to integrate two auxiliary losses: one at the end of the BU1 stream (the same classification loss) and the other on the image plane at the end of the TD stream (a segmentation loss). Table 2d shows no additional improvement when using these auxiliary losses on the Multi-MNIST experiment.
Note that a TD auxiliary segmentation loss (here, a binary cross-entropy loss between the predicted digit and a zero-one map of the target digit) can also be used to add interpretability to our scheme. Examples are shown in the CUB200 experiment, section 4.3.4.

We used the CLEVR dataset to show our performance in the case of correlated tasks (the questions on CLEVR are correlated) and to demonstrate our ability to enlarge the number of tasks with no extra hardware while keeping the target accuracies. Our results are summarized in Table 3. We trained all models using a VGG-11 architecture but decreased the number of channels in the output of the last convolutional layer from 512 to 128 to allow training with a larger batch size. A detailed analysis of the number of parameters can be found in the supplementary materials. We used a batch size of 128 images trained on 2 GPUs with a learning rate of 1e-4 using the Adam optimizer. Table 3 shows that our results are better than both the single-task and uniform-scaling approaches while using far fewer parameters (the third column shows the number of parameters of each architecture as a multiplier of the number of parameters in a single-task VGG-11 backbone). Here, the channel-wise modulation approach uses the smallest number of parameters but also gets the worst results. The table also shows that scaling the number of tasks (with no additional hardware) is not only feasible but may also improve the results of each task separately. We further note that we used TD layers that are a replica of the VGG-11 BU layers; further reducing the number of parameters by decreasing the channel dimensions in the TD stream can easily be done but is not our main scope in this work.

We used the CUB-200 dataset to further demonstrate our performance on correlated tasks in real-world images, scaling the number of tasks and using another type of backbone architecture (a ResNet backbone). In contrast to previous experiments, we did not aim at reducing the number of parameters (since we are using a ResNet backbone); rather, we demonstrate better performance and our built-in capability to visualize the attention maps at the end of the TD stream. We trained all models using a ResNet-18 architecture. We used a batch size of 128 images trained on 2 GPUs with a learning rate of 1e-4 using the Adam optimizer for 200 epochs. While training our architecture we add an auxiliary loss at the end of the TD stream. The target in this case is a one-hot 224x224 mask, where only a single pixel is labeled as foreground, blurred by a Gaussian kernel with a standard deviation of 3 pixels. Training one task at a time, we minimize the cross-entropy loss over the 224x224 softmax output at the end of the TD stream (which encourages a small detected area) for each visible ground-truth annotated task/part. For a fair comparison, we also compared our results to the channel-wise modulation architecture trained with the same localization auxiliary loss (on the coarse map at the end of the BU2 stream, fifth line in the table). Figures 4c and 4d demonstrate the attention maps produced by our architecture at inference time. Figure 4c is an example where the predicted mask is well localized on the crown of the bird (the task) and the color is correctly predicted. Figure 4d demonstrates an error case where the breast of the bird is not well localized by the mask and, as a consequence, the color is wrongly predicted. More examples of interest are shown in the supplementary materials. Our quantitative results are summarized in Table 4.
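Before turning to the quantitative results, here is a minimal sketch of the auxiliary TD localization loss described above. Details such as normalizing the blurred map into a distribution are our assumptions of a natural implementation.

```python
import numpy as np
import torch
import torch.nn.functional as F
from scipy.ndimage import gaussian_filter

def make_target(y, x, size=224, sigma=3.0):
    m = np.zeros((size, size), dtype=np.float32)
    m[y, x] = 1.0                          # one-hot point annotation
    m = gaussian_filter(m, sigma=sigma)    # blur with std of 3 pixels
    return torch.from_numpy(m / m.sum())   # normalize to a soft target

def td_localization_loss(td_logits, target):
    # spatial softmax over the 224x224 TD output encourages a small area
    log_p = F.log_softmax(td_logits.flatten(), dim=0)
    return -(target.flatten() * log_p).sum()   # cross-entropy vs. soft target

target = make_target(y=100, x=80)
td_logits = torch.randn(224, 224, requires_grad=True)
loss = td_localization_loss(td_logits, target)
loss.backward()
```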
The results show better accuracy of our scheme compared to all baselines. We specifically show better accuracy compared to the channel-wise modulation scheme, indicating the preference of our image-dependent, spatial-wise modulation process on the CUB200 database.

We proposed a novel architecture for multi-task learning using a top-down modulation network. Compared with current approaches, our scheme does not use task-dependent branches or task-dependent modules, and the modulation process is executed spatial-wise as well as channel-wise, guided by the task and by the information from the image itself. We tested our network on three different datasets, achieving on-par or better accuracies on both correlated and uncorrelated sets of tasks. We have also demonstrated inherent advantages of our scheme: adding tasks with no extra hardware, which results in a decrease in the total number of parameters as the number of tasks scales, and allowing interpretability by pointing to relevant image locations. More generally, multi-task learning algorithms are likely to become increasingly relevant, since general vision systems need to deal with a broad range of tasks, and executing them efficiently in a single network is still an open problem. In future work we plan to adapt our described architecture to a wider range of applications (e.g. segmentation, image generation) and examine possible combinations of approaches, such as combining a partial-branching strategy with our TD approach. We also plan to study additional aspects of multi-task learning, such as scaling the number of tasks and tackling the catastrophic forgetting problem.

For the Multi-MNIST experiments, we use an architecture based on LeNet. Following prior work, we use two 5x5 convolutional layers and one fully-connected layer as the shared backbone, and two other fully-connected layers as task-specific branches for the multi-branched architecture (see figure 5a for details). Our architecture is illustrated in figure 5b. We use the shared backbone and a single branch as the recognition network (BU2). On the TD stream we use an embedding layer followed by two 5x5 convolutional layers. BU1 shares the same weights with BU2. The three subnetworks are combined together using lateral connections, implemented as 1x1 convolutions. The networks for the CLEVR and CUB200 experiments were similarly implemented using VGG-11 and ResNet-18 backbones respectively. The exact numbers of parameters used by the Multi-MNIST and CLEVR architectures are summarized in Table 5.

To demonstrate our interpretability capabilities we trained our proposed network with an auxiliary localization cross-entropy loss on the last layer of the TD stream (details in section 4.3.4). Here we present several more examples of interest that we did not include in the main text.

More qualitative examples demonstrate our ability to identify the relevant regions that most affected the network prediction. In all of these images the target part (the task, shown in the upper part of each image) is precisely localized, and the prediction (shown in the lower part of each image) follows the ground truth. Best viewed in color while zoomed in.

Figure 7: Error cases. Left images demonstrate good examples counted as failure cases due to annotation errors: our network successfully localizes the asked part and correctly predicts its color. Right images demonstrate bad localization examples; ground-truth classes were still predicted, with a very high score, possibly due to the correlated nature of the tasks.
Best viewed in color while zoomed in.
We propose a top-down modulation network for multi-task learning applications with several advantages over current schemes.
993
scitldr
Clustering algorithms have wide applications and play an important role in data analysis fields, including time series data analysis. The performance of a clustering algorithm depends on the features extracted from the data. However, in time series analysis, there has been a problem that conventional methods based on the signal shape are unstable under phase shift, amplitude and signal length variations. In this paper, we propose a new clustering algorithm focused on the dynamical system aspect of the signal, using a recurrent neural network and the variational Bayes method. Our experiments show that our proposed algorithm is robust against the above variations and boosts classification performance.

The rapid progress of IoT technology has brought huge data in wide fields such as traffic, industries, medical research and so on. Most of these data are gathered continuously and accumulated as time series data, and the extraction of features from a time series has been studied intensively in recent years. The difficulty of time series analysis is the variation of the signal in time, which gives rise to phase shift, compression/stretching and length variation. Many methods have been proposed to solve these problems. Dynamic Time Warping (DTW) was designed to measure the distance between warping signals; this method solved the compression/stretching problem by applying a dynamic programming method. Fourier transforms or wavelet transforms can extract features based on the frequency components of signals; phase-shift-independent features are obtained by calculating the power spectrum of the transform. In recent years, the recurrent neural network (RNN), which has a recursive network structure, has been widely used in time series analysis. This recursive structure makes it possible to retain past information of a time series. Furthermore, this architecture enables us to apply the algorithm to signals with different lengths. Although the methods mentioned above are effective solutions for the compression/stretching, phase shift and signal length variation issues respectively, little has been studied about these problems comprehensively.

Let us turn our attention to feature extraction again. Unsupervised learning using the neural network architecture known as the autoencoder (AE) has been studied as a feature extraction method. AEs using an RNN structure (RNN-AE) have also been proposed, and they have been applied to real data such as driving data. RNN-AE can also be interpreted as a discrete dynamical system: chaotic behavior, and methods to suppress it, have been studied from this point of view. In this paper, we propose a new clustering algorithm for feature extraction focused on the dynamical system aspect of RNN-AE. To achieve this, we employ a multi-decoder autoencoder with multiple decoders to describe different dynamical systems. We also apply the variational Bayes method as the clustering algorithm. This paper is composed as follows: in Section 4, we explain AE from a dynamical system view, then define our model and derive its learning algorithm. In Section 5, we describe the application of our algorithm to actual time series to show its robustness, running two experiments using periodic data and driving data. Finally, we summarize our study and describe our future work in Section 7. A lot of excellent clustering/representation algorithms using AEs have been studied so far.
One line of prior work integrated the distance between data and centroids into an objective function to obtain a cluster structure in the encoded data space. Another proposed a generative model based on the variational autoencoder (VAE) with a clustering structure as a prior distribution. A third achieved highly separable clustering by adding a regularization term for the orthogonality and balance of clusters of the encoded data. These, however, are regularization methods of the objective function, focused only on the distribution of the encoded data. They did not give a clustering policy based on the decoder structure, namely, the reconstruction process of the data. From a dynamical system point of view, one decoder of an RNN-AE corresponds to a single dynamics in the space of latent representations. Hence, it is natural to equip RNN-AE with multiple decoders to implement multiple dynamics. Such an extension of RNN-AE, however, has yet to be proposed in related works, to the best of our knowledge.

3 RECURRENT NEURAL NETWORK AND DYNAMICAL SYSTEM

3.1 RECURRENT NEURAL NETWORK USING UNITARY MATRIX

RNN is a neural network designed for time series data. The architecture of the main unit is called a cell, and its mathematical expression is shown in Fig. 1 and in the following equation. Suppose we are given a time series x_1, ..., x_T of D-dimensional observations, where D denotes the data dimension. RNN, unlike the usual feed-forward neural network, applies the same transform matrix to the hidden variable recursively:

    z_t = V h_{t−1} + U x_t + b,   h_t = σ(z_t),

where σ(·) is an activation function and z_t, h_t, b ∈ R^L. This recursive architecture makes it possible to handle signals with different lengths, although it is vulnerable to the vanishing gradient problem, as with deep neural networks (DNNs). Long short-term memory (LSTM) and the gated recurrent unit (GRU) are widely known solutions to this problem. These methods have an extra mechanism, called a gate structure, to control output scaling and the retaining/forgetting of signal information. Though this mechanism works effectively in many application fields, the architecture of the network is relatively complicated. As an alternative, simpler method to solve this problem, algorithms using a unitary matrix as the transfer matrix V have been proposed in recent years. Since a unitary matrix does not change the norm of the variable vector, we avoid the vanishing gradient problem. In addition, the network architecture remains unchanged from the original RNN. In this paper, we focus on the dynamical system aspect of the original RNN. We employ the unitary-matrix-type RNN to take advantage of this dynamical system structure. However, to implement the above method, we need to find the transform matrix V in the space of unitary matrices U(L) = {V ∈ GL(L, C) | V*V = I}, where GL(L, C) is the set of complex-valued general linear matrices of size L × L and * denotes the adjoint matrix. Several methods to find the transform matrix from U(L) have been reported so far; here, we adopt one of these previously proposed methods.

The architecture of an AE using RNN is shown in Fig. 2. An AE is composed of an encoder unit and a decoder unit. The parameters are optimized by minimizing the reconstruction error between X and X_dec, where X is the input data and X_dec is the decoded data. The input data is recovered from only the encoded signal h using the matrices (V_dec, U_dec); therefore h is considered the essential information of the input signal.

Figure 2: Architecture of RNN Autoencoder.

When focusing on the transformation of the hidden variable, this recursive operation has the same structure as a discrete dynamical system:

    h_{t+1} = f(h_t),

where f is given by the cell update defined above.
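The following small demonstration, under the assumption of a random unitary matrix obtained by QR decomposition (rather than the EUNN parameterization used later in the experiments), illustrates why a unitary transition matrix avoids vanishing or exploding hidden states: repeatedly applying V preserves the norm of h.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 8
A = rng.normal(size=(L, L)) + 1j * rng.normal(size=(L, L))
V, _ = np.linalg.qr(A)                 # Q from QR is unitary: V* V = I

h = rng.normal(size=L) + 1j * rng.normal(size=L)
norms = []
for t in range(1000):                  # h_{t+1} = V h_t (no input, no bias)
    h = V @ h
    norms.append(np.linalg.norm(h))

print(norms[0], norms[-1])             # the norm stays constant over 1000 steps
```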
From this point of view, we can understand that the RNN describes a universal dynamical system structure that is common to all input signals. In this section, we give the architecture of the Multi-Decoder RNN AE (MDRA) and its learning algorithm. As we discussed in the previous section, RNN can extract the dynamical system characteristics of a time series. In the case of the original RNN, the model expresses just one dynamical system; hence all input data are recovered from the encoded h by the same recovery rule. Therefore h is usually used as the feature value of the input data. In contrast, in this paper, we focus on the transformation rule itself. For this purpose, we propose MDRA, which has multiple decoders to extract various dynamical system features. The architecture of MDRA is shown in Fig. 3. Let W denote the whole set of parameters of the encoder and the K decoders. We derive the learning algorithm to optimize W in the following section.

We applied a clustering method to derive the learning algorithm of MDRA. Many clustering algorithms have been proposed; here we employ the variational Bayes (VB) method, because the VB method enables us to adjust the number of clusters by tuning the hyperparameters of a prior distribution. We first define the free energy, which is the negative marginal log-likelihood,

    F(X|W) = −log p(X|W),

where X is the data tensor defined in Section 3 and W is the parameter tensor of MDRA defined above. Y = (y_1, ..., y_N) is the set of latent variables, each of which represents an allocation to a decoder. That is, y_n = (y_n1, ..., y_nK)^T ∈ R^K, where y_nk = 1 if X_n is allocated to the k-th decoder and 0 otherwise. The likelihood is the probability density function representation of MDRA parametrized by the tensor W, and p(α) and p(β) are its prior distributions for a probability vector α = (α_1, ..., α_K) and a precision parameter β > 0. We applied the Gaussian mixture model as our probabilistic model; hence p(α) and p(β) are given by Dirichlet and gamma distributions respectively, which are the conjugate prior distributions of the multinomial and Gaussian distributions:

    p(α) = Dir(α | θ_0),   p(β) = Gam(β | ν_0, λ_0).

Here, θ_0 > 0, ν_0 > 0 and λ_0 > 0 are hyperparameters, g(h_n | W^k_dec) = X^n_dec,k denotes the decoder mapping of the RNN from the encoded n-th data h_n, H = (h_1, ..., h_N), and T_n D is the total signal dimension of the input signal X_n, including the dimension of the input data. To apply the variational Bayes algorithm, we derive an upper bound of the free energy by applying Jensen's inequality; the gap is the Kullback-Leibler divergence D_KL(·||·) between the variational posterior and the true posterior. The upper bound F̃(X|W) is called the variational free energy or (negated) evidence lower bound (ELBO). The variational free energy is minimized using the variational Bayes method under fixed parameters W. Furthermore, it is also minimized with respect to the parameters W by applying the RNN learning algorithm to the second term of F̃(X|W).

In this section, we derive the variational Bayes algorithm for MDRA to minimize the variational free energy. We show the outline of the derivation below (for a detailed derivation, see Appendix A). The general formulas of the variational Bayes algorithm, log q(Y) ∝ E_{q(α,β)}[log p(X, Y, α, β | W)] and log q(α, β) ∝ E_{q(Y)}[log p(X, Y, α, β | W)], applied to the above probabilistic model yield the specific algorithm shown in Appendix A.1. We then minimize the weighted reconstruction term with respect to W using the RNN learning algorithm. From the above discussion, we finally obtain the following MDRA algorithm (Fig. 4):

repeat
    Calculate R = (r_nk) by the VB part of MDRA (Algorithm 2).
    Update the parameters W by the RNN learning algorithm.
until the difference of the variational free energy F̃(X|W) < Threshold
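A hedged sketch of the allocation (E-like) step of the algorithm above: the responsibilities r_nk are softmax weights combining the expected log mixing proportion with the precision-weighted reconstruction error of each decoder. The exact update terms follow the standard Gaussian-mixture VB derivation and are our assumption here, not a line-by-line transcription of Algorithm 2.

```python
import numpy as np

def vb_responsibilities(recon_err, log_alpha_exp, beta_exp):
    # recon_err: (N, K) squared reconstruction error of decoder k on sample n
    # log_alpha_exp: (K,) E[log alpha_k] under the Dirichlet posterior
    # beta_exp: E[beta], expected precision of the Gaussian noise model
    log_rho = log_alpha_exp[None, :] - 0.5 * beta_exp * recon_err
    log_rho -= log_rho.max(axis=1, keepdims=True)     # numerical stability
    r = np.exp(log_rho)
    return r / r.sum(axis=1, keepdims=True)           # each row sums to one

recon_err = np.abs(np.random.randn(5, 3))
r = vb_responsibilities(recon_err, np.log(np.ones(3) / 3), beta_exp=1.0)
print(r.sum(axis=1))   # all ones
```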
We first examined the basic performance of our algorithm using periodic signals. Periodic signals are typical time series signals expressed by dynamical systems. The input signals have 2, 4, and 8 periods respectively within 64 steps. Each signal is given a phase shift (up to one period), an amplitude variation (from 50% to 100% of the maximum amplitude), additive noise (up to 2% of the maximum amplitude) and a signal length variation (up to 80% of the maximum signal length). Examples of input data are illustrated in Fig. 5. We compared RNN-AE to MDRA on feature extraction performance using the above periodic signals. Fig. 6 and Fig. 7 show the results of RNN-AE and MDRA respectively; the parameter settings are listed in Table 3 in Appendix B. We used multi-dimensional scaling (MDS) as the dimension reduction method to visualize the distributions of features in Fig. 6 and Fig. 7. Fig. 6 shows the distribution of the encoded data h_n, which is the initial value of the decoder unit in Fig. 2. We found that RNN-AE can separate the input data into three regions corresponding to each frequency. However, each frequency region is distributed widely; therefore some parts of the regions overlap each other. Fig. 7 shows the distributions of the encoded data h_n and the clustering allocation weights r_n extracted by MDRA. The distribution of r_n, shown in the right figure of Fig. 7, is completely separated into each frequency component without overlap. This shows that the distribution of r_n, as an extracted feature, is robust to phase shift, amplitude and signal length variation. We also show that MDRA can boost classification accuracy using actual driving data in the next section.

We applied our algorithm to a driver identification problem. We use data consisting of 3 drivers' signals when driving, including speed, acceleration, braking and steering angle signals. The input signal was about 10 seconds of differential data (128 steps), cut out from the original data by a sliding window; we use only the data for which the maximum acceleration difference is more than a certain threshold. The detailed information on the input data is shown in Table 1. We also show samples of data (differences of acceleration) in Fig. 10 in Appendix C. The feature extraction by RNN-AE and MDRA is shown in Fig. 8: the left and middle figures show the distributions of the encoded data h_n of RNN-AE and MDRA respectively, and the right figure shows the distribution of r_n of MDRA. The parameter settings of this experiment are listed in Table 4. We can find different trends in the distributions of the latent variables h_n and r_n of MDRA: the distribution of r_n spreads wider than that of h_n. Table 2 shows the accuracy of the driver identification using the above features.

We verified the feature extraction performance of MDRA using actual time series data. In Section 5.1, we saw that the periodic signals are completely classified by frequency using the clustering weight r_n. In this experiment, the average clustering weights, i.e., the elements of the mean of r_n, are (3.31e-01, 8.31e-47, 8.31e-47, 3.46e-01, 8.31e-47, 3.19e-01, 8.31e-47), with only three components having effective weights. This weight narrowing-down is one of the advantages of VB learning. The left of Fig. 9 shows an enlarged view around "freq 4" in Fig. 7 (right). We found that the distribution of "freq 4" is in fact spread linearly. The right of Fig. 9 shows the corresponding clustering result. We found that the data for each frequency formed several spreading clusters without overlapping.
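For reference, the periodic benchmark signals of Section 5.1 can be generated with a sketch like the following. The precise sampling ranges (e.g. our reading of "maximum 80%" as lengths down to 80% of the maximum) are assumptions consistent with the description above.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_signal(periods, max_len=64):
    length = rng.integers(int(0.8 * max_len), max_len + 1)  # length variation
    amp = rng.uniform(0.5, 1.0)                             # amplitude variation
    phase = rng.uniform(0.0, 2 * np.pi)                     # up to one period
    t = np.arange(length)
    x = amp * np.sin(2 * np.pi * periods * t / max_len + phase)
    return x + rng.normal(0.0, 0.02, size=length)           # 2% additive noise

dataset = [make_signal(p) for p in rng.choice([2, 4, 8], size=300)]
print(len(dataset), dataset[0].shape)
```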
As we saw earlier, the distribution of r_n spreads more widely than that of h_n. We infer that the spreading of the distribution of r_n was caused by extracting the diversity of the driving scenes. In addition, the identification results show that the combination of the features given by r_n and h_n can improve performance. Prior work that studied a driver identification algorithm using the AE proposed minimizing an error that integrates the reconstruction error of the AE and the classification error of a deep neural network. This algorithm can avoid over-fitting by using unlabeled data, whose collection cost is smaller than that of labeled data. From these results, we can expect that MDRA can contribute not only to boosting identification performance but also to restraining over-fitting.

In this paper, we proposed a new algorithm, MDRA, that can extract dynamical system features of time series data. We conducted experiments using periodic signals and actual driving data to verify the advantages of MDRA. The results show that our algorithm not only is robust to phase shift, amplitude and signal length variation, but can also boost classification performance. The phase transition phenomenon of the variational Bayes learning method, depending on the hyperparameters, has been reported in prior work. The hyperparameter setting of the prior distribution has a great effect on the clustering and classification performance. We intend to undertake a detailed study of the relation between the feature extraction performance and the hyperparameter setting of the prior distributions in the future.

In the detailed derivation (Appendix A), T_n D means the total signal dimension. We first obtain the update for log q(Y). We then calculate log q(α, β); this expression can be divided into two terms, involving α and β respectively. Applying E_{q(y_n)}[y_nk] = 1 · q(y_nk = 1) + 0 · q(y_nk = 0) = q(y_nk = 1) = r_nk to the α term yields the update for q(α); on the other hand, similarly applying E_{q(y_n)}[y_nk] = r_nk to the β term yields log q(β). We finally calculate log ρ_nk. To do so, we first compute log q(α) and log q(β): q(β) turns out to be a gamma distribution Gam(β | ν̃, λ̃), and we use the expectations of β and log β under the gamma distribution; similarly, q(α) turns out to be a Dirichlet distribution with parameters (θ̃_1, ..., θ̃_K), calculated in the same way as in the general mixture model. We therefore finally obtain the updates for r_nk, and from these results the variational Bayes algorithm of Appendix A.1 is derived, where we used r_nk = E_{q(y_n)}[y_nk]. The remaining minimization with respect to W is achieved by applying the RNN learning algorithm; from the above discussion, including Appendix A.1, we obtain the MDRA algorithm. (For the expectation of log β, putting β = e^x gives x = log β and dβ = e^x dx.)

In this section, we show the parameter settings of the experiments in Section 5. Here L is the dimension of the hidden variable h; capacity, fft and cpx are parameters of the EUNN; K is the number of decoders; and θ_0, ν_0, λ_0 are the hyperparameters of the prior distributions.

We applied our algorithm to the driving data in Section 5.2. We used the differential signals of the speed, acceleration, braking and steering angle signals in the experiment. Fig. 10 shows examples of acceleration signals.
Novel time series data clustering algorithm based on dynamical system features.
994
scitldr
Two important topics in deep learning both involve incorporating humans into the modeling process: model priors transfer information from humans to a model by regularizing the model's parameters; model attributions transfer information from a model to humans by explaining the model's behavior. Previous work has taken important steps to connect these topics through various forms of gradient regularization. We find, however, that existing methods that use attributions to align a model's behavior with human intuition are ineffective. We develop an efficient and theoretically grounded feature attribution method, expected gradients, and a novel framework, attribution priors, to enforce prior expectations about a model's behavior during training. We demonstrate that attribution priors are broadly applicable by instantiating them on three different types of data: image data, gene expression data, and health care data. Our experiments show that models trained with attribution priors are more intuitive and achieve better generalization performance than both equivalent baselines and existing methods to regularize model behavior.

Recent work on interpreting machine learning models has focused on feature attribution methods. Given an input feature, a model, and a prediction on a particular sample, such methods assign a number to the input feature that represents how important the input feature was for making the prediction. Previous literature about such methods has focused on the axioms they should satisfy (e.g., Štrumbelj and Kononenko), and on how attribution methods can give us insight into model behavior. These methods can be an effective way of revealing problems in a model or a dataset. For example, a model may place too much importance on undesirable features, rely on many features when sparsity is desired, or be sensitive to high-frequency noise. In such cases, we often have a prior belief about how a model should treat input features, but for neural networks it can be difficult to mathematically encode this prior in terms of the original model parameters. Ross et al. (2017b) introduce the idea of regularizing explanations to train models that better agree with domain knowledge. Given a binary variable indicating whether each feature should or should not be important for predicting on each sample in the dataset, their method penalizes the gradients of unimportant features. However, two drawbacks limit the method's applicability to real-world problems. First, gradients don't satisfy the theoretical guarantees that modern feature attribution methods do. Second, it is often difficult to specify which features should be important in a binary manner. More recent work has stressed that incorporating intuitive, human priors will be necessary for developing robust and interpretable models. Still, it remains challenging to encode meaningful, human priors like "have smoother attribution maps" or "treat this group of features similarly" by penalizing the gradients or parameters of a model. In this work, we propose an expanded framework for encoding abstract priors, called attribution priors, in which we directly regularize differentiable functions of a model's axiomatic feature attributions during training. This framework, which can be seen as a generalization of gradient-based regularization, can be used to encode meaningful domain knowledge more effectively than existing methods.
Furthermore, we introduce a novel feature attribution method, expected gradients, which extends integrated gradients, is naturally suited to being regularized under an attribution prior, and avoids hyperparameter choices required by previous methods. Using attribution priors, we build improved deep models for three different prediction tasks. On images, we use our framework to train a deep model that is more interpretable and generalizes better to noisy data by encouraging the model to have piecewise smooth attribution maps over pixels. On gene expression data, we show how to both reduce prediction error and better capture biological signal by encouraging similarity among gene expression features using a graph prior. Finally, on a patient mortality prediction task, we develop a sparser model and improve performance when learning from limited training data by encouraging a skewed distribution of the feature attributions.

In this section, we formally define an attribution prior and give three example priors for different data types. Let X ∈ R^{n×p} denote a dataset with labels y ∈ R^o, where n is the number of samples, p is the number of features, and o is the number of outputs. In standard deep learning we aim to find optimal parameters θ by minimizing a loss, subject to a regularization term Ω(θ) on the parameters:

    θ* = argmin_θ L(θ; X, y) + λ′ Ω(θ).

For some model parameters θ, let Φ(θ, X) be a feature attribution method, which is a function of θ and the data X. Let φ_i^ℓ be the feature importance of feature i in sample ℓ. We formally define an attribution prior as a scalar-valued penalty function of the feature attributions, Ω(Φ(θ, X)), which represents a log-transformed prior probability distribution over possible attributions:

    θ* = argmin_θ L(θ; X, y) + λ Ω(Φ(θ, X)),

where λ is the regularization strength. We note that the attribution prior function Ω is agnostic to the attribution method Φ. While in Section 3 we propose a feature attribution method for attribution priors, other attribution methods can be used. This includes existing methods like integrated gradients or simply the gradients themselves. In the latter case, we recover the method proposed in Ross et al. (2017b), where Φ(θ, X) is the matrix whose ℓ, i-th entry is the gradient of the loss at the ℓ-th sample with respect to the i-th feature, and A is a binary matrix indicating which features should be penalized in which samples. Often, however, we do not know which features are important in advance. Instead, we can define different attribution priors for different tasks depending on the data and our domain knowledge. To demonstrate how attribution priors can capture human intuition in a variety of domains, in the following sections we first define and then apply three different priors for three different data types.

Prior work on interpreting image models has focused on creating pixel attribution maps, which assign a value to each pixel indicating how important that pixel was for a model's prediction. These attribution maps can be noisy and often highlight seemingly unimportant pixels in the background. Such attributions can be difficult to understand, and may indicate the model is vulnerable to adversarial attacks. Although we may desire a model with smoother attributions, existing methods only post-process attribution maps and do not change model behavior; such techniques may not be faithful to the original model. In this section, we describe how to apply our framework to train image models with naturally smoother attributions.
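Before specializing to images, the generic attribution-prior objective above can be sketched as follows. The attribution function, the penalty Ω, and the helper names are placeholders for whatever attribution method and prior the user chooses; the simple gradient attribution used in the usage example is for illustration only.

```python
import torch

def attribution_prior_loss(model, x, y, attribute, omega, lam, task_loss):
    # theta* = argmin L(theta; X, y) + lam * Omega(Phi(theta, X))
    phi = attribute(model, x)                 # (batch, features) attributions
    return task_loss(model(x), y) + lam * omega(phi)

def omega_mask(phi, mask):
    # example Omega: penalize attributions of known-unimportant features
    return (phi * mask).pow(2).sum()

model = torch.nn.Linear(5, 1)
x = torch.randn(8, 5).requires_grad_(True)
y = torch.randn(8, 1)
grad_attr = lambda m, x: torch.autograd.grad(m(x).sum(), x, create_graph=True)[0]
loss = attribution_prior_loss(
    model, x, y, grad_attr,
    lambda phi: omega_mask(phi, torch.tensor([0., 0., 0., 1., 1.])),
    lam=0.1, task_loss=torch.nn.functional.mse_loss)
loss.backward()
```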
To regularize pixel-level attributions, we use the following intuition: neighboring pixels should have a similar impact on an image model's output. To encode this intuition, we apply a total variation loss on pixel-level attributions as follows:

    Ω_image(Φ(θ, X)) = Σ_ℓ Σ_{i,j} |φ^ℓ_{i+1,j} − φ^ℓ_{i,j}| + |φ^ℓ_{i,j+1} − φ^ℓ_{i,j}|,

where φ^ℓ_{i,j} is the attribution for the i, j-th pixel in the ℓ-th training image. Including the λ scale factor, this penalty is equivalent to placing a Laplace(0, λ^{−1}) prior on the differences between adjacent pixel attributions. For further details, see the Appendix.

In the image domain, our attribution prior took the form of a penalty encouraging smoothness over adjacent pixels. In other domains, we may have prior information about specific relationships between features that can be encoded as an arbitrary graph (such as social networks, knowledge graphs, or protein-protein interactions). For example, prior work in bioinformatics has shown that protein-protein interaction networks contain valuable information that can be used to improve performance on biological prediction tasks. These networks can be represented as a weighted, undirected graph. Formally, say we have a weighted adjacency matrix W ∈ R^{p×p}_+ for an undirected graph, where the entries encode our prior belief about the pairwise similarity of the importances between two features. For a biological network, W_{i,j} encodes either the probability or strength of interaction between the i-th and j-th genes (or proteins). We can encourage similarity along graph edges by penalizing the squared Euclidean distance between each pair of feature attributions in proportion to how similar we believe them to be. Using the graph Laplacian (L_G = D − W), where D is the diagonal degree matrix of the weighted graph, this becomes:

    Ω_graph(Φ(θ, X)) = φ̄^T L_G φ̄.

In this case, we choose to penalize global rather than local feature attributions. So we define φ̄_i to be the importance of feature i across all samples in our dataset, where this global attribution is calculated as the average magnitude of the feature attribution across all samples:

    φ̄_i = (1/n) Σ_{ℓ=1}^n |φ_i^ℓ|.

Ω_graph is equivalent to placing a Normal(0, λ^{−1}) prior on the differences between attributions for features that are adjacent in the graph. See the Appendix for details.

Feature selection and sparsity are popular ways to alleviate the curse of dimensionality, facilitate interpretability, and improve generalization by building models that use a small number of input features. A straightforward way to build a sparse deep model is to apply an L1 penalty to the first layer (and possibly subsequent layers) of the network. Similarly, the sparse group lasso (SGL) penalizes all weights connected to a given feature, while Ross et al. (2017a) penalize the gradients of each feature in the model. These approaches suffer from two problems. First, a feature with small gradients or first-layer weights may still strongly affect the model's output; a feature whose attribution value (e.g., integrated or expected gradient) is zero is much less likely to have any effect on predictions. Second, successfully minimizing the L1 or SGL penalty is not necessarily the best way to create a sparse model: a model that puts weight w on 1 feature is penalized more than one that puts weight w/(2p) on each of p features. Prior work on sparse linear regression has shown that the Gini coefficient G of the weights, proportional to 0.5 minus the area under the CDF of sorted values, avoids such problems and corresponds more directly to a sparse model.
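Stepping back to the two penalties defined earlier in this section, Ω_image (anisotropic total variation over pixel attributions) and Ω_graph (a Laplacian quadratic form over global attributions) can be sketched directly:

```python
import torch

def omega_pixel(phi):
    # phi: (batch, H, W) pixel attributions; anisotropic total variation
    dh = (phi[:, 1:, :] - phi[:, :-1, :]).abs().sum()
    dw = (phi[:, :, 1:] - phi[:, :, :-1]).abs().sum()
    return dh + dw

def omega_graph(phi_bar, L_G):
    # quadratic form phi_bar^T L_G phi_bar encourages smoothness over edges
    return phi_bar @ (L_G @ phi_bar)

phi = torch.randn(4, 28, 28)
W = torch.rand(10, 10); W = (W + W.T) / 2; W.fill_diagonal_(0.0)
L_G = torch.diag(W.sum(dim=1)) - W        # graph Laplacian D - W
phi_bar = torch.randn(10).abs()           # global (mean-magnitude) attributions
print(omega_pixel(phi).item(), omega_graph(phi_bar, L_G).item())
```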
We extend the Gini-based analysis to deep models by noting that the Gini coefficient can be written differentiably, and we use it to develop an attribution penalty based on the global feature attributions φ̄_i:

    Ω_sparse(Φ(θ, X)) = − Σ_i Σ_j |φ̄_i − φ̄_j| / Σ_i φ̄_i,

which is proportional to the negative Gini coefficient of the global attributions. This is similar to the total variation penalty Ω_image, but normalized and with a flipped sign to encourage differences. The corresponding attribution prior is maximized when global attributions are zero for all but one feature, and minimized when attributions are uniform across features.

Here we propose a feature attribution method called expected gradients and describe why it is a natural choice for attribution priors. Expected gradients is an extension of integrated gradients with fewer hyperparameter choices. Like several other attribution methods, integrated gradients aims to explain the difference between a model's current prediction and the prediction that the model would make when given a baseline input. This baseline input is meant to represent some uninformative reference input, which represents not knowing the value of the input features. Although choosing such an input is necessary for several feature attribution methods, the choice is often made arbitrarily. For example, in image tasks, the image of all zeros is often chosen as a baseline, but doing so implies that black pixels will not be highlighted as important by existing feature attribution methods. In many domains, it is not clear how to choose a baseline that correctly represents a lack of information. Our method avoids an arbitrary choice of baseline by modeling not knowing the value of a feature by integrating over a dataset. For a model f, the integrated gradients value for feature i is defined as:

    IntegratedGradients_i(x) := (x_i − x′_i) × ∫_0^1 ∂f(x′ + α(x − x′))/∂x_i dα,

where x is the target input and x′ is the baseline input. To avoid specifying x′, we define the expected gradients value for feature i as:

    ExpectedGradients_i(x) := ∫_{x′} (x_i − x′_i) × ∫_0^1 ∂f(x′ + α(x − x′))/∂x_i dα p_D(x′) dx′,

where D is the underlying data distribution. Since expected gradients is also a diagonal path method, it satisfies the same axioms as integrated gradients. Directly integrating over the training distribution is intractable, so we instead reformulate the integrals as expectations:

    ExpectedGradients_i(x) = E_{x′∼D, α∼U(0,1)}[(x_i − x′_i) × ∂f(x′ + α(x − x′))/∂x_i].

This expectation-based formulation lends itself to a natural sampling-based approximation method: draw samples of x′ from the training dataset and α from U(0,1), compute the value inside the expectation for each sample, and average over samples.

Training with expected gradients: If we let the attribution function Φ in our attribution prior Ω(Φ(θ, X)) be expected gradients, a good approximation during training appears to require computing an expensive Monte Carlo estimate with hundreds of extra gradient calls every training step. Ordinarily, this would make training with such attributions intractable. However, most deep learning models today are trained using some variant of batch gradient descent, in which the gradient of a loss function is approximated over many training steps using mini-batches of data. We can use a batch training procedure to approximate expected gradients over the training procedure as well. During training, we let k be the number of samples we draw to compute expected gradients for each mini-batch of data. Remarkably, we find that a value as small as k = 1 suffices to regularize the explanations because of the averaging effect of the expectation formulation over many training samples. This choice of k leads to every sample in the training set being used as a reference over the course of an epoch, with only one additional gradient call per training step.
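The Monte Carlo estimator implied by the expectation formulation above can be written directly; this is a minimal sketch, assuming single-output models and treating the training data tensor as the reference set:

```python
import torch

def expected_gradients(model, x, references, n_samples=200):
    attr = torch.zeros_like(x)
    for _ in range(n_samples):
        idx = torch.randint(0, references.shape[0], (x.shape[0],))
        x_ref = references[idx]                              # x' ~ D
        alpha = torch.rand(x.shape[0], *([1] * (x.dim() - 1)))  # alpha ~ U(0,1)
        point = (x_ref + alpha * (x - x_ref)).requires_grad_(True)
        grad = torch.autograd.grad(model(point).sum(), point)[0]
        attr += (x - x_ref) * grad        # value inside the expectation
    return attr / n_samples

model = torch.nn.Sequential(torch.nn.Linear(5, 8), torch.nn.ReLU(),
                            torch.nn.Linear(8, 1))
x = torch.randn(3, 5)
refs = torch.randn(100, 5)   # the training data acts as the reference set
print(expected_gradients(model, x, refs).shape)
```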
This k = 1 scheme results in far more reference samples than the 100-200 we found necessary for reliable individual attributions (see Appendix).

We first evaluate expected gradients by comparing it with other feature attribution methods on 18 benchmarks introduced in prior work (Table 1). These benchmark metrics aim to evaluate how well each attribution method finds the most important features for a given dataset and model. For all metrics, a larger number corresponds to a better feature attribution method. Expected gradients significantly outperforms the next best feature attribution method (p = 7.2 × 10^−5, one-tailed binomial test). We provide more details, and additional benchmarks, in the Appendix.

We apply our Ω_pixel attribution prior to the CIFAR-10 dataset. We train a VGG16 network from scratch and optimize hyperparameters for the baseline model without an attribution prior. To choose λ, we search over values in [10^−20, 10^−1] and choose the λ that minimizes the attribution prior penalty and achieves a test accuracy within 10% of the baseline model. Figure 1 displays expected gradients attribution maps for both the baseline and the model regularized with an attribution prior on 5 randomly selected test images. In all examples, the attribution prior results in a model with visually smoother attributions. Remarkably, smoother attributions also often better highlight the structure of the target object in the image. Recent work in understanding image classifiers has suggested that they are brittle to small domain shifts: small changes in the underlying distribution of the training and test set can result in significant drops in test accuracy. To simulate a domain shift, we apply Gaussian noise to images in the test set and re-evaluate the performance of the regularized model and the baseline model. As an adaptation of Ross et al. (2017b), we also compare to regularizing the total variation of gradients with the same criteria for choosing λ. For each method, we train 5 models with different random initializations. In Figure 1, we plot the mean and standard deviation of test accuracy as a function of the standard deviation of added Gaussian noise. The figure shows that our regularized model is more robust to noise than both the baseline and the gradient-based model. Although our method provides both robustness and more intuitive saliency maps, this comes at the cost of reduced test set accuracy (0.93 ± 0.002 for the baseline vs. 0.85 ± 0.003 for the pixel attribution prior model). The trade-off between robustness and accuracy that we observe is in line with previous work suggesting that image classifiers trained solely to maximize test accuracy rely on features that are brittle and difficult to interpret. Despite this trade-off, we find that at a stricter hyperparameter cutoff for λ (within 1% test accuracy of the baseline, rather than 10%) our methods are still able to achieve modest but significant robustness relative to the baseline. For results at different hyperparameter thresholds, as well as more details on our training procedure and additional experiments on MNIST, see the Appendix.
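The k = 1 training trick described above can be implemented by rolling the minibatch, so no extra reference data needs to be loaded; each input uses another input from the same batch as its reference, at the cost of one extra gradient call per step. This is a sketch of that scheme, assuming the batch-roll detail:

```python
import torch

def eg_single_sample(model, x_batch):
    x_ref = torch.roll(x_batch, shifts=1, dims=0)   # shifted batch as references
    alpha = torch.rand(x_batch.shape[0], *([1] * (x_batch.dim() - 1)))
    point = (x_ref + alpha * (x_batch - x_ref)).requires_grad_(True)
    grad = torch.autograd.grad(model(point).sum(), point, create_graph=True)[0]
    return (x_batch - x_ref) * grad   # differentiable, so Omega can be trained

model = torch.nn.Linear(5, 1)
phi = eg_single_sample(model, torch.randn(8, 5))
omega = phi.abs().sum()               # any attribution-prior penalty of choice
omega.backward()                      # gradients flow into the model parameters
```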
We downloaded publicly available gene expression and drug response data for patients with acute myeloid leukemia (AML, a type of blood cancer) and tried to predict patients' drug response from their gene expression. For this regression task, an input sample was a patient's gene expression profile plus a one-hot encoded vector indicating which drug was tested in that patient, while the label we tried to predict was drug response (measured by IC50, the concentration of the drug required to kill half of the patient's tumor cells). To define the graph used by our prior, we downloaded the tissue-specific gene interaction graph for the tissue most closely related to AML from the HumanBase database. We find that a two-layer neural network trained with our graph attribution prior (Ω_graph) significantly outperforms all other methods in terms of test set performance as measured by R² (Figure 2). Unsurprisingly, when we replace the biological graph from HumanBase with a randomized graph, we find that the test performance is no better than that of a neural network trained without any attribution prior. Extending the method proposed in Ross et al. (2017b) by applying our novel graph prior as a penalty on the model's gradients, rather than on the axiomatically correct expected gradients feature attribution, does not perform statistically significantly better than a baseline neural network. We also observe significantly improved test performance when using the prior graph information to regularize a linear LASSO model. Finally, we note that our graph attribution prior neural network significantly outperforms a recent method for utilizing graph information in deep neural networks, graph convolutional neural networks.

To see if our model's attributions match biological intuition, we conducted Gene Set Enrichment Analysis (a modified Kolmogorov-Smirnov test) to see if our top genes, as ranked by mean absolute feature attribution, were enriched for membership in any pathways (see the Appendix for more details, including the top pathways for each model). We see that the neural network with the tissue-specific graph attribution prior captures significantly more biologically relevant pathways (an increased number of significant pathways after FDR correction) than a neural network without attribution priors (see Figure 2). Furthermore, the pathways used by our model more closely match biological expert knowledge: pathways included prognostically useful AML gene expression profiles, as well as important AML-related transcription factors (see Figure 2 and the Appendix).

Here, we show that the Ω_sparse attribution prior can build sparser models that perform significantly better in settings with limited training data. We use a publicly available healthcare mortality prediction dataset of 13,000 patients, where the 36 features (119 after one-hot encoding) represent medical data such as a patient's age, vital signs, and laboratory measurements. The binary outcome is survival after 10 years.

Figure 2: Left: A neural network trained with our graph attribution prior (bold) attains the best test performance, while a neural network trained with the same graph penalty on the gradients (italics, adapted from Ross et al. (2017b)) does not perform significantly better than a standard neural network. Right: A neural network trained with our graph attribution prior captures far more significant biological pathways than a standard neural network, and also captures more AML-relevant pathways.
Sparse models in this setting may enable accurate models to be trained with very few labeled patient samples, or reduce cost by accurately risk-stratifying patients using few lab tests. We subsample the training and validation sets to each contain only 100 patients, and run each experiment 100 times with a new random subsample to average out variance. We build 3-layer binary-classifier neural networks regularized using L1, sparse group lasso (SGL) and sparse attribution prior penalties to predict patient survival, as well as an L1 penalty on gradients adapted for global sparsity from Ross et al. (2017b; a). The regularization strength was tuned from 10^−10 to 10^3 using the validation set for all methods, and the best model for each run was chosen by validation performance over 100 models trained with the chosen parameters (see Appendix). The sparse attribution prior enables more accurate test predictions (Figure 3) and sparser models when little training data is available, with p < 10^−3 by Wilcoxon signed-rank test for all comparisons. We also plot the average cumulative importance of sorted features and find that the sparse attribution prior is much more effective at concentrating importance in the top few features (Figure 3). In particular, L1-penalizing the model's gradients as in Ross et al. (2017a) improves neither sparsity nor performance. A Gini gradient penalty slightly improves performance and sparsity but does not match the sparse attribution prior. Finally, we plot the average sparsity of the models (Gini coefficient) against their validation ROC-AUC across the full range of regularization strengths (Figure 3). The sparse attribution prior attains higher performance and sparsity than the other models. Details and results for L2 penalties, dropout, and other attribution priors are in the Appendix.

There have been many previous attribution methods proposed for deep learning models. We chose to extend integrated gradients because it is easy to differentiate and comes with theoretical guarantees. Training with gradient penalties has also been discussed in existing literature: early work introduced the idea of regularizing the magnitude of model gradients in order to improve generalization performance on digit classification. Since then, gradient regularization has been used extensively as an adversarial defense mechanism in order to minimize changes to network outputs over small perturbations of the input. Other work makes a connection between gradient-based training for adversarial purposes and network interpretability. Researchers have argued that adversarial examples may arise due to features that are predictive yet non-intuitive, and have stressed the need to incorporate human intuition into the training process. There is very little previous work on actually incorporating feature attribution methods into training. One study formally describes the problem of classifiers having unexpected behavior on inputs not seen in the training distribution, like those generated by asking whether a prediction would change if a particular feature value changed; it describes an active learning algorithm that updates a model based on points generated from a counter-factual distribution. That work differs from ours in that it uses feature attributions to generate counter-factual examples, but does not directly penalize the attributions themselves. Ross et al. (2017b) introduce the idea of training models to have correct explanations, not just good performance.
Their method can be seen as a specific instance of our framework, in which the attribution function is gradients and the penalty function minimizes the gradients of features known to be unimportant for each sample. Our work is more general in two ways. First, we instantiate three different penalty functions that encode human intuition without needing to know which features are unimportant in advance. Second, we propose a novel feature attribution method that can be regularized efficiently using a sampling procedure, and show that doing so provides better generalization performance than regularizing gradients with the same penalty.

The immense popularity of deep learning has driven its application in many domains with diverse, complicated prior knowledge. While it is in principle possible to hand-design network architectures to encode this knowledge, we propose a simpler approach. Using attribution priors, any knowledge that can be encoded as a differentiable function of feature attributions can be used to encourage a model to act in a particular way in a particular domain. We also introduce expected gradients, a feature attribution method that is theoretically justified and removes the choice of a single reference value that many existing feature attribution methods require. We further demonstrate that expected gradients naturally integrates with attribution priors via sampling during SGD. The combination allows us to improve model performance by encoding prior knowledge across several different domains. It leads to smoother and more interpretable image models, biological predictive models that incorporate graph-based prior knowledge, and sparser health care models that can perform better in data-scarce scenarios. Attribution priors provide a broadly applicable framework for encoding domain knowledge, and we believe they will be valuable across a wide array of domains in the future.

Normally, training with a penalty on any function of the gradients would require solving a differential equation. To avoid this, we adopt a double back-propagation scheme in which the gradients are first calculated with respect to the training loss, and alternately calculated for the loss with respect to the attributions. Our attribution method, expected gradients, requires reference samples to be drawn from the training data. More specifically, for each input in a batch of inputs, we need k additional inputs to calculate expected gradients for that input batch. As long as k is smaller than the batch size, we can avoid any additional data reading by re-using the same batch of input data as a reference batch, as in prior work. We accomplish this by shifting the batch of inputs k times, such that each input in the batch uses k other inputs from the batch as its reference values.

In general, minimizing the error of a model corresponds to maximizing the likelihood of the data under a generative model consisting of the learned model plus parametric noise. For example, minimizing mean squared error in a regression task corresponds to maximizing the likelihood of the data under the learned model, assuming Gaussian-distributed errors:

    θ_MLE = argmin_θ L(θ; X, y),

where θ_MLE is the maximum-likelihood estimate of θ under the model Y = f_θ(X) + N(0, σ).
An additive regularization term is equivalent to adding a multiplicative (independent) prior to yield a maximum a posteriori estimate:

    θ_MAP = argmin_θ L(θ; X, y) + λ Ω(θ),

which corresponds to a prior p(θ) ∝ e^{−λΩ(θ)}.

Image prior: Our image prior uses a total variation penalty, which has been well studied. It has been shown that this penalty is equivalent to placing 0-mean, iid, Laplace-distributed priors on the differences between adjacent pixel values. That is, φ_{i+1,j} − φ_{i,j} ∼ Laplace(0, λ^{−1}) and φ_{i,j+1} − φ_{i,j} ∼ Laplace(0, λ^{−1}). Some prior work does not call this penalty "total variation", but it is in fact the widely used anisotropic version of total variation, and it is directly implemented in TensorFlow.

Graph prior: The graph prior extends the image prior to arbitrary graphs: just as the image penalty is equivalent to placing a Laplace prior on adjacent pixels in a regular graph, the graph penalty Ω_graph is equivalent to placing a Gaussian prior on adjacent features in an arbitrary graph with Laplacian L_G.

Sparsity prior: Our sparsity prior uses the Gini coefficient as a penalty, which can be written

    G(φ̄) = Σ_i Σ_j |φ̄_i − φ̄_j| / (2p Σ_i φ̄_i).

By taking exponentials of this function, we find that minimizing the sparsity regularizer is equivalent to maximizing likelihood under a prior proportional to e^{λ G(φ̄)}. To our knowledge, this prior does not directly correspond to a named distribution. However, we can note that its maximum value occurs when one φ̄_i is 1 and all others are 0, and that its minimum occurs when all φ̄_i are equal.

Since expected gradients reformulates feature attribution as an expected value over two distributions (where reference samples x′ are drawn from the data distribution and the linear interpolation parameter α is drawn from U(0,1)), we wanted to ensure that we are drawing an adequate number of samples for convergence of our attributions when benchmarking the performance of our attribution method. Since our benchmarking code was run on the Correlated Groups 60 synthetic dataset, as a baseline we explain all 1000 samples of this dataset using the full dataset (1000 samples) as the reference samples. To assess convergence to the attributions attained at this number of samples, we measure the mean absolute difference between the attribution matrices resulting from different numbers of samples (see Figure 4). We empirically find that our attributions are well converged by the time 100-200 samples are drawn. Therefore, for the rest of our benchmarking experiments, we used 200 as the number of samples. During training, even using the lowest possible setting of k = 1, we end up drawing far more than 200 samples over the course of an epoch (an order of magnitude in the tens of thousands, rather than hundreds).

To compare the performance of expected gradients with other feature attribution methods, we used benchmark metrics proposed in prior work. These metrics were selected as they capture a variety of recent approaches to quantitatively evaluating feature importance estimates. For example, the Keep Positive Mask metric (KPM) is used to test how well an attribution method can find the features that lead to the greatest increase in the model's output. This metric progressively removes features by masking with their mean value, in order from least positive impact on model output to most positive impact on model output, as ranked by the attribution method being evaluated. As more features are masked, the model's output is increased, creating a curve. The KPM metric measures the area under this curve (a larger area corresponds to a better attribution method). In addition to the KPM metric, 17 other similar metrics (e.g.
Remove Absolute Resample, Keep Negative Impute, etc.) were used (see the supplementary material of the benchmark paper for more details on the benchmark metrics). For all of these metrics, a larger number corresponds to a better attribution method. In addition to finding that expected gradients outperforms all other attribution methods on nearly all metrics tested for the dataset shown in Table 1 in the main text (the synthetic Correlated Groups 60 dataset), we also tested all 18 metrics on another dataset proposed in the same paper (Independent Linear 60) and find that expected gradients is chosen as the best method by all metrics in that case as well (see Table 2). The Independent Linear 60 dataset is comprised of 60 features, where each feature is a 0-mean, unit-variance Gaussian random variable plus Gaussian noise, and the label to predict is a linear function of these features. The Correlated Groups 60 dataset is essentially the same, but certain groups of 3 features have 0.99 correlation. For attribution methods to compare, we considered expected gradients (as described in the main text), integrated gradients (as previously described), gradients, and random.

One unfortunate consequence of choosing an arbitrary baseline point for methods like integrated gradients is that the baseline point by definition is unimportant. That is, if a user chooses the constant black image as the baseline input, then purely black pixels will not be highlighted as important by integrated gradients. This is true for any constant baseline input. Since expected gradients integrates over a dataset as its baseline input, it avoids forcing a particular pixel value to be unimportant. To demonstrate this, we use the Inception V4 network trained on the ImageNet 2012 challenge. We restore pre-trained weights from the Tensorflow Slim library. In Figure 5, we plot attribution maps of both expected gradients and integrated gradients as well as raw gradients. Here, we use the constant black image as a baseline input for integrated gradients. For both attribution methods, we use 200 sample/interpolation points. The figure demonstrates that integrated gradients fails to highlight black pixels.

We train a VGG16 model from scratch, modified for the CIFAR-10 dataset as in prior work. We train using stochastic gradient descent with an initial learning rate of 0.1 and an exponential decay of 0.5 applied every 20 epochs. Additionally, we use a momentum level of 0.9. For augmentation, we shift each image horizontally and vertically by a pixel shift uniformly drawn from the range [-3, 3], and randomly rotate each image by an angle uniformly drawn from the range [-15, 15]. We use a batch size of 128. Before training, we normalize the training dataset to have zero mean and unit variance, and standardize the test set with the mean and variance of the training set. We use k = 1 reference samples for our attribution prior while training. When training with attributions over images, we first normalize the per-pixel attribution maps by dividing by the standard deviation before computing the total variation; otherwise, the total variation can be made arbitrarily small without changing model predictions by scaling the pixel attributions down close to 0. In the main text, we demonstrated the robustness of the image attribution prior model with λ chosen as the value that minimized the total variation of attributions while keeping test accuracy within 10% of the baseline model.
This corresponds to λ = 0.001 for both gradients and expected gradients if we search through 20 values logarithmically spaced in the range [10⁻²⁰, 10⁻¹]. If, instead, we choose the λ that minimizes the total variation of attributions while keeping test accuracy equivalent to the baseline model (within 1%), we see that both the attribution prior and regularizing the gradients provide modest robustness to noise. This corresponds to λ = 0.0001 for both gradients and expected gradients. We show this in Figure 6.

Figure 7: Plotting the trade-off between accuracy and minimizing total variation of expected gradients (left) or gradients (right). For both methods, there is a clear elbow point after which test accuracy degrades to no better than random. The total variation of attributions is judged based on the attribution being penalized: expected gradients for the left plot, gradients for the right plot.

For both the gradient-based model and the image attribution prior model, we also plot test accuracy and total variation of the attributions (gradients or expected gradients, respectively) in Figure 7. The λ values we use correspond to the immediate two values before test accuracy on the original test set breaks down entirely for both the gradient and image attribution prior models.

We repeat the same experiment on MNIST. We train a CNN with two convolutional layers and a single hidden layer. The convolutional layers have 5x5 filters, a stride length of 1, and 32 and 64 filters total, respectively. Each convolutional layer is followed by a max pooling layer of size 2 with stride length 2. The hidden layer has 1024 units and a dropout rate of 0.5 during training. Dropout is turned off when calculating the gradients with respect to the attributions. We train with the ADAM optimizer with the default parameters (α = 0.001, β₁ = 0.9, β₂ = 0.999, ε = 10⁻⁸). We train with an initial learning rate of 0.0001, with an exponential decay of 0.95 applied every epoch, for a total of 60 epochs. For all models, we train with a batch size of 50 images, and use k = 1 reference sample per attribution while training. We choose λ by sweeping over values in the range [10⁻²⁰, 10⁻¹]. We choose the λ that minimizes the total variation of attributions such that the test error is within 1% of the test error of the baseline model, which corresponds to λ = 0.01 for both the gradient model and the pixel attribution prior model. In Figure 8, we plot the robustness of the baseline, the model trained with an attribution prior, and the model trained by penalizing the total variation of gradients. We find that on MNIST, penalizing the gradients performs similarly to penalizing expected gradients. We also find that it is easier to achieve high test set accuracy and robustness simultaneously. In Figure 8 we also plot the attribution maps of the baseline model compared to the model regularized with an image attribution prior. We find that the model trained with an image attribution prior more smoothly highlights the digit in the image.
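As an illustration of the total-variation attribution penalty used in the CIFAR-10 and MNIST experiments above, here is a minimal sketch of the normalized anisotropic total variation computation; the function name and the use of NumPy (rather than our TensorFlow training code) are our own choices for exposition.

import numpy as np

def tv_attribution_penalty(attr, eps=1e-8):
    # attr: [batch, H, W] per-pixel attribution maps (e.g. expected gradients).
    # Normalize by the standard deviation first, so the penalty cannot be made
    # arbitrarily small simply by scaling the attributions toward zero.
    attr = attr / (attr.std() + eps)
    # Anisotropic total variation: L1 norm of differences of adjacent pixels.
    tv_rows = np.abs(attr[:, 1:, :] - attr[:, :-1, :]).sum()
    tv_cols = np.abs(attr[:, :, 1:] - attr[:, :, :-1]).sum()
    return tv_rows + tv_cols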
In this section, we detail experiments performed on applying Ω_pixel to classifiers trained on the ImageNet 2012 challenge. We omit this section from the main text since, for computational reasons, the hyperparameters chosen in this section may not necessarily be optimal. We use the VGG16 architecture introduced in prior work. For computational reasons, we do not train a model from scratch; instead, we fine-tune using pre-trained weights from the Tensorflow Slim package. We fine-tune on the ImageNet 2012 training set using the original cross entropy loss function in addition to Ω_pixel, using asynchronous gradient updates with a batch size of 16 split across 4 Nvidia 1080 Ti GPUs. During fine-tuning, we use the same training procedure outlined by the original authors. This includes randomly cropping training images to 224 × 224 pixels, randomly flipping images horizontally, and normalizing each image to the same range. To optimize, we use gradient descent with a learning rate of 0.00001 and a momentum of 0.9. We use a weight decay of 0.0005, and set λ = 0.00001 for the first epoch of fine-tuning and λ = 0.00002 for the second epoch of fine-tuning. As with the MNIST experiments, we normalize the feature attributions before taking the total variation.

We plot the attribution maps on images from the validation set using expected gradients for the original VGG16 weights (Baseline), as well as fine-tuned for 320,292 steps (Image Attribution Prior 1 Epoch) and fine-tuned for 382,951 steps, in which the last 60,000 steps were with twice the λ penalty (Image Attribution Prior 1.25 Epochs). Figure 9 demonstrates that fine-tuning using our penalty results in sharper and more interpretable image maps than the baseline network.

Figure 9: Attribution maps generated by Expected Gradients on the VGG16 architecture before and after fine-tuning using an attribution prior.

In addition, we also plot the attribution maps generated by two other methods: integrated gradients (Figure 10) and raw gradients (Figure 11). Networks regularized with our attribution prior show clearer attribution maps using any of the above methods, which implies that the network is actually viewing pixels more smoothly, independent of the attribution method chosen. We note that in practice, we observe trade-offs between test accuracy and interpretability/robustness similar to those mentioned in prior work. We show the validation performance of the VGG16 network before and after fine-tuning in Table 3 and observe that the validation accuracy does decrease. However, due to the computational cost even of fine-tuning on ImageNet, we did not perform a hyperparameter search for the optimal learning rate or λ penalty. We anticipate that with more time and computational resources, we could achieve a better trade-off between interpretable attribution maps and test accuracy.

Figure 11: Attribution maps generated by raw gradients on the VGG16 architecture before and after fine-tuning using an attribution prior.

To ensure a quality signal for prediction while removing noise and batch effects, it is necessary to carefully preprocess RNA-seq gene expression data. For the biological data experiments, RNA-seq data were preprocessed as follows:

1. First, raw transcript counts were converted to fragments per kilobase of exon model per million mapped reads (FPKM). FPKM is more reflective of the molar amount of a transcript in the original sample than raw counts, as it normalizes the counts for different RNA lengths and for the total number of reads. FPKM is calculated as FPKM_i = (X_i × 10⁹) / (l_i × N), where X_i is the raw counts for a transcript, l_i is the effective length of the transcript, and N is the total number of counts.

2. Next, we removed non-protein-coding transcripts from the dataset.
3. We removed transcripts that were not meaningfully observed in our dataset by dropping any transcript where more than 70% of measurements across all samples were equal to 0.

4. We log₂-transformed the data.

5. We standardized each transcript across all samples, such that the mean for the transcript was equal to zero and the variance of the transcript was equal to one: X̃_i = (X_i − μ_i) / σ_i, where X_i is the expression for a transcript, μ_i is the mean expression of that transcript, and σ_i is the standard deviation of that transcript across all samples.

6. Finally, we corrected for batch effects in the measurements using the ComBat tool available in the sva R package.

To increase the number of samples in our dataset, we opted to use the identity of the drug being tested as a feature, rather than one of a number of possible output tasks in a multi-task prediction. This follows from prior literature on training neural networks to predict drug response. This gave us 30,816 samples (covering 218 patients and 145 anti-cancer drugs). Defining a sample as a drug and a patient, however, meant we had to choose carefully how to stratify samples into our train, validation, and test sets. While it is perfectly legitimate in general to randomly stratify samples into these sets, we wanted to specifically focus on how well our model could learn trends from gene expression data that would generalize to novel patients. Therefore, we stratified samples at a patient level rather than at the level of individual samples (e.g. no samples from any patient in the test set ever appeared in the training set). We split 20% of the total patients into a test set (6,155 samples), and then split 20% of the training data into a validation set for hyperparameter selection (4,709 samples).

LASSO: We used the scikit-learn implementation of the LASSO. We tested a range of α parameters from 10⁻⁹ to 1, and found that the optimal value for α was 10⁻² by mean squared error on the validation set.

Graph LASSO: For our Graph LASSO we used the Adam optimizer in TensorFlow, with a learning rate of 10⁻⁵, to optimize the following loss function: L(w) = ||y − Xw||₂² + λ||w||₁ + ν wᵀ L_G w, where w ∈ R^d is the weight vector of our linear model and L_G is the graph Laplacian of our HumanBase network. In particular, we downloaded the "Top Edges" version of the hematopoietic stem cell network, which is thresholded to only have non-zero values for pairwise interactions that have a posterior probability greater than 0.1. We used the value of λ selected as optimal in the regular LASSO model (10⁻², corresponding to the α parameter in scikit-learn), and then tuned over a range of ν values from 10⁻³ to 100. We found that a value of 10 was optimal according to MSE on the validation set.

Neural networks: We tested a variety of hyperparameter settings and network architectures via validation set performance to choose our best neural networks. We tested a number of feed-forward network architectures with varying numbers and sizes of hidden layers. We tested a range of L1 penalties on all of the weights of the network, from 10⁻⁷ to 10⁻². All models attempted to optimize a least squares loss using the Adam optimizer, with learning rates again selected by hyperparameter tuning from 10⁻⁵ to 10⁻³. Finally, we implemented an early stopping parameter of 20 rounds to select the number of epochs of training (training is stopped after no improvement on validation error for 20 epochs, and the number of epochs is chosen based on optimal validation set error).
We found the optimal architecture (chosen by lowest validation set error) had two hidden layers of size 512 and 256, an L1 penalty on the weights of 10⁻³, and a learning rate of 10⁻⁵. We additionally found that 120 was the optimal number of training epochs.

Attribution prior neural networks: To apply our attribution prior to our neural networks, after tuning our networks to the optimal conditions described above, we added extra epochs of fine-tuning in which we ran an alternating minimization of two objectives: the original prediction loss and the graph attribution prior penalty Ω_graph. Following Ross et al. (2017b), we selected ν to be 100 so that the Ω_graph term would be initially equal in magnitude to the least squares and L1 loss term. We found that 5 extra epochs of tuning were optimal by validation set error. We drew k = 10 samples for our attributions. To test our attribution prior using gradients as the feature attribution method (rather than expected gradients), we followed the exact same procedure, only we now compute φ̄ as the average magnitude of the gradients rather than the average magnitude of the expected gradients.

Graph convolutional networks: We followed a previously published implementation of graph convolution. The architectures searched were as follows: in every network we first had a single graph convolutional layer (we were limited to one graph convolution layer due to memory constraints on each Nvidia GTX 1080-Ti GPU that we used), followed by two fully connected layers whose sizes we searched over. We tuned over a wide range of hyperparameters, including L2 penalties on the weights ranging from 10⁻⁵ to 10⁻², L1 penalties on the weights ranging from 10⁻⁵ to 10⁻², learning rates of 10⁻⁵ to 10⁻³, and dropout rates ranging from 0.2 to 0.8. We found the optimal hyperparameters based on validation set error were two hidden layers of size 512 and size 256, an L2 penalty on the weights of 10⁻⁵, a learning rate of 10⁻⁵, and a dropout rate of 0.6. We again used an early stopping parameter and found that 47 epochs was the optimal number.

Looking at the resulting R² for prediction, we see that using the graph prior improves the predictive performance of a linear model compared to L1 regularization alone (Graph LASSO vs. LASSO). However, we are able to attain a similar degree of predictive performance simply by switching from a linear model to a neural network that does not use the prior graph information at all. Our best performing model was the neural network with the graph attribution prior. We use a t-test to compare the R² attained from 10 independent retrainings of the neural network to the R² attained from 10 independent retrainings of the attribution prior model and find that predictive performance is significantly higher for the model with the graph attribution prior (p = 0.004696). Since we added graph regularization to our model by fine-tuning, we wanted to ensure that the improved performance did not simply come from the additional epochs of training without the attribution prior. We use a t-test to compare the R² attained from 10 independent retrainings of the regular neural network to the R² attained from 10 independent retrainings of the neural network with the same number of additional epochs that were optimal when adding the graph penalty (see Figure 12). We found no significant difference between the test error of these models (p = 0.7565).
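For concreteness, a minimal sketch of the graph penalty used during this fine-tuning is given below; the function name and NumPy formulation are ours, and L_G is assumed to be precomputed from the HumanBase network.

import numpy as np

def graph_penalty(attributions, L_G):
    # attributions: [n_samples, n_features] expected gradients matrix.
    # L_G: [n_features, n_features] graph Laplacian of the prior network.
    phi_bar = np.abs(attributions).mean(axis=0)  # global per-feature importance
    # Penalizes adjacent features (genes) whose importances differ.
    return float(phi_bar @ L_G @ phi_bar)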
To ensure that the increased performance of the attribution prior model was due to real biological information, we replaced the gene-interaction graph with a randomized graph (a symmetric matrix with an identical number of non-zero entries to the real graph, but with entries placed in random positions). We then compared the R² attained from 10 independent retrainings of a neural network with no graph attribution prior to that from 10 independent retrainings of a neural network regularized with the random graph, and found that test error was not significantly different between these two models (p = 0.5039). We also compared to graph convolutional neural networks, and found that our network with a graph attribution prior outperformed the graph convolutional neural network (p = 0.0073). To ensure that the models were learning the attribution metric we tried to optimize for, we compared the explanation graph penalty (φ̄ᵀ L_G φ̄) between the unregularized and regularized models, and found that the graph penalty was on average nearly two orders of magnitude lower in the regularized models (see Figure 14). We also examined the pathways that our top attributed genes were enriched for using Gene Set Enrichment Analysis and found that not only did our graph attribution prior model capture far more significant pathways, it also captured far more AML-relevant pathways (see Figure 13). We defined AML-relevant by a query for the term "AML," as well as queries for AML-relevant transcription factors.

I SPARSITY EXPERIMENTS

I.1 DATA DESCRIPTION

Our data for the sparsity experiments used data from the NHANES I survey, and contained 36 variables (expanded to 119 features by one-hot encoding of categorical variables) gathered from 13,000 patients. The measurements include demographic information like age, sex, and BMI, as well as physiological measurements like blood, urine, and vital sign measurements. The prediction task is a binary classification of whether the patient was still alive or not 10 years after the data were gathered. Data were mean-imputed and standardized so that each feature had 0 mean and unit variance. A fixed train/validation/test split of 7500/2500/3000 patients was used, with all hyperparameter tuning performed on the validation set. For each of the 100 experimental replicates, 100 data points were sampled uniformly at random from the training and validation sets to yield a 100/100/3000 split.

We trained a range of neural networks to predict survival in the NHANES data. The architecture, nonlinearities, and training rounds were all held constant at values that performed well on an unregularized network, and the type and degree of regularization were varied. All models used ReLU activations and a 2-class softmax output; in addition, all models ran for 20 epochs with an SGD optimizer with learning rate 1.0 on the size-100 training data. The entire 100-sample training set fit in one batch. All 100 samples in the training set were used for expected gradients attributions during training and evaluation.

Architecture: We considered a range of architectures, including single-hidden-layer 32-node, 128-node, and 512-node networks, as well as two-layer variants.

Regularizers: We tested a large array of regularizers. See section I.4 for details on how the optimal regularization strength was found for each regularizer. Italicized entries were evaluated in the small-data experiments shown in the main text and I.6. For these penalties, the optimal regularization strength from validation-set tuning is listed.
Non-italicized entries were evaluated on a sparsity/AUC plot using the full data (subsection I.7), but were not evaluated in the small-data experiments.

• Sparse Attribution Prior: Ω_sparse as defined in the main text. The best performing models for each replicate had an average regularization strength over the 100 runs of λ = 1.60 × 10⁻¹.

• Mixed L1/Sparse Attribution Prior: Motivated by the observation that the Gini coefficient is normalized and only penalizes the relative distribution of global feature importances, we attempted adding an L1 penalty to ensure the attributions also remain small in an absolute sense. This did not result in improvements to performance or sparsity in full-data experiments (subsection I.7).

• Sparse Group Lasso: Rather than simply encouraging the weights of the first-layer matrix to be zero, the sparse group lasso also encourages entire columns of the matrix to shrink together by placing an L2 penalty on each column. As in prior work, we added a weighted sum of column-wise L2 norms to the L1 norms of each layer's matrix, without tuning the relative contribution of the two norms (equal weight on both terms). We also follow that work and penalize the absolute value of the biases of each layer as well. The average optimal regularization strength was λ = 1.62 × 10⁻².

• Sparse Group Lasso First Layer: This penalty is similar to the above, but instead of penalizing all weights and biases, only the first-layer weight matrix is penalized. This model outperformed the full SGL implementation, but did not outperform the sparse attribution prior, the Gini penalty on gradients, or the unregularized model. The average optimal regularization strength was λ = 2.16 × 10⁻³.

• L1 First Layer: In order to facilitate sparsity, we placed an L1 penalty on the input layer of the network. No regularization was placed on subsequent layers.

• L1 All Layers: This penalty places an L1 penalty on all matrix multiplies in the network (not just the first layer). The average optimal regularization strength was λ = 2.68 × 10¹.

• L1 Expected Gradients: This penalty penalizes the L1 norm of the vector of global feature attributions φ̄ (analogous to how LASSO penalizes the weight vector in linear regression).

• L2 First Layer: This penalty places an L2 penalty on the input layer of the network, with no regularization on subsequent layers.

• L2 All Layers: This penalty places an L2 penalty on all matrix multiplies in the network (not just the first layer).

• L2 Expected Gradients: This penalty penalizes the L2 norm of the vector of global feature attributions φ̄ (analogous to how ridge regression penalizes the weight vector in linear models).

• Dropout: This penalty "drops out" a fraction p of nodes during training, but uses all nodes at test time.

• Baseline (Unregularized): Our baseline model used no regularization.

• L1 Gradients: To achieve the closest match to the work of Ross et al. (2017a;b), we placed an L1 penalty on the global gradients attribution vector of the network (the mean across all samples of the absolute value of the gradient for each feature). This is similar to the "neural LASSO" of Ross et al. (2017a), but with a goal of global sparsity (a model that uses few features overall) rather than local sparsity (a model that uses a small number of possibly different features for each sample). The average optimal regularization strength was λ = 1.70 × 10⁻².
• Gini Gradients: An intermediate step between the gradient penalties of Ross et al. (2017a;b) and our sparse attribution prior would use gradients as the attribution, but our Gini-coefficient-based sparsity metric as the penalty. In this model we encouraged a large Gini coefficient of the mean absolute value of the gradient attributions of the model, averaged over all samples. The average optimal regularization strength was λ = 1.33 × 10⁻¹.

The main-text figures, with small-data experiments repeated 100 times, compared the sparse attribution prior to methods previously used in the literature on sparsity in deep networks: the L1 penalty on all layers, the sparse group lasso methods, and the L1 gradients penalty (Ross et al., 2017a). We also evaluated the Gini gradients penalty in these experiments. The other methods were not evaluated in the repeated small-data experiments shown in the main text for space reasons, because there was less literature support, and because preliminary analysis (Figure 18) showed worse performance on sparsity with no benefit to accuracy.

We selected the hyperparameters for our models based on the best validation performance over all parameters considered. There was one free parameter to tune for all methods other than the unregularized baseline (no tuning parameter) and the mixed L1/Sparse Attribution Prior model in our preliminary full-data experiments (two parameters: an L1 penalty and an attribution penalty). We searched all L1, L2, SGL and attribution prior penalties with 131 points sampled on a log scale over [10⁻¹⁰, 10³] (Figure 15). Some penalties, including the sparse attribution prior, mixed, gradient, and sparse group lasso penalties, produced NaN outputs for certain regularization settings. We retried several times when NaNs occurred, but if the problem persisted after multiple restarts, the parameter setting was skipped. In preliminary experiments on the full data, we tuned the dropout probability with 130 linearly spaced points. The mixed L1/Sparse Attribution Prior model was tuned in a 2D grid, with 11 L1 penalties sampled on a log scale over [10⁻⁷, 10³] and 11 attribution prior penalties sampled on a log scale over [10⁻¹⁰, 10⁰].

Performance and Sparsity Bar Plots: The performance bar graph (Figure 3, top left) was generated by plotting the mean test ROC-AUC of the best model of each type (chosen by validation ROC-AUC), averaged over each of the 100 subsampled datasets, with confidence intervals given by 2 times the standard error over the 100 replicates. The sparsity bar graph (Figure 3, bottom left) was constructed by the same process, but with Gini coefficients rather than ROC-AUCs.

Feature Importance Distribution Plot: The distribution of feature importances was plotted in the main text as a Lorenz curve (Figure 3, bottom right): for each model, the features were sorted by global attribution value φ̄_i, and the cumulative normalized value of the lowest q features was plotted, from 0 at q = 0 to 1 at q = p. A lower area under the curve indicates that more features have relatively small attribution values, indicating the model is sparser. Because 100 replicates were run on small subsampled datasets, the Lorenz curve for each model was plotted using the averaged mean absolute sorted feature importances over all replicates. Thus, for a given model, the q = 1 point represented the mean absolute feature importance of the least important feature averaged over each replicate, q = 2 added the mean importance for the second least important feature averaged over each replicate, and so on.
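The Lorenz curve and Gini coefficient described above can be computed from the attribution matrix as in the following sketch; the NumPy formulation and names are ours, and the standard sorted-rank form of the Gini coefficient is used.

import numpy as np

def lorenz_and_gini(attributions):
    # attributions: [n_samples, n_features] attribution matrix.
    phi_bar = np.sort(np.abs(attributions).mean(axis=0))  # ascending importances
    lorenz = np.concatenate([[0.0], np.cumsum(phi_bar) / phi_bar.sum()])
    n = len(phi_bar)
    ranks = np.arange(1, n + 1)
    # Gini coefficient of the global importance distribution (higher = sparser).
    gini = ((2 * ranks - n - 1) * phi_bar).sum() / (n * phi_bar.sum())
    return lorenz, gini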
Performance vs Sparsity Plot: Validation ROC-AUC and model sparsity were calculated for each of the 131 regularization strengths and averaged over each of the 100 replicates. These were plotted on a scatterplot to show the possible range of model sparsities and ROC-AUC performances (Figure 3, top right), as well as the tradeoff between sparsity and performance. The sparse attribution prior was the only model capable of achieving a smooth tradeoff between sparsity and performance, as shown with the blue dashed line.

Figure 15: Validation performance and Gini coefficient as a function of regularization strength for all models, averaged over 100 subsampled datasets. Blank areas indicate where some of the 100 models diverged for a given hyperparameter setting, as described in subsection I.4.

Statistical significance: The statistical significance of the sparse attribution prior performance was assessed by comparing the ROC-AUCs of the best-performing sparse attribution prior models on each of the 100 subsampled datasets to those of the best-performing other models (L1 gradients, L1 weights, SGL, and unregularized). Significance was assessed by a Wilcoxon signed-rank test, paired by subsampled dataset. The same process was used to calculate the significance of model sparsity as measured by the Gini coefficient.

Additional SGL Penalty: We show performance and sparsity for the penalties studied in the main text plus first-layer SGL as bar plots, with confidence intervals from 100 experimental replicates (Figure 16, top two plots). The sparse attribution prior outperforms other methods by a wide margin. The Gini penalty on plain gradients performs slightly better than other methods, but not significantly. Thus it seems that the combination of both EG and Gini coefficient based penalties leads to better performance. The first-layer SGL slightly increases sparsity but does not outperform an unregularized model in ROC-AUC. We also plot average performance on the validation set against average sparsity for the full range of searched parameters (Figure 16, bottom). Again, no method is able to compete with the sparse attribution prior in sparsity or performance, but the plain gradients Gini penalty also results in a small increase in sparsity, as do a small number of parameter settings for the first-layer SGL. There is a single point in the scatterplot for which first-layer SGL appears to outperform the sparse attribution prior in validation performance; however, this does not translate into superior test performance in the bar plots, nor is there a smooth tradeoff curve between sparsity and AUC as with the sparse attribution prior.

Feature Importance Summary: We also show summaries of the mean absolute feature importance for the top 20 features in each model in Figure 17.

We narrowed the range of possible penalties by studying the sparsity and performance achieved by additional penalties in preliminary experiments on the full dataset, without subsampling to study small-data performance. Performance (area under an ROC curve, AUC-ROC) was plotted as a function of sparsity (Gini coefficient) for all models. Figure 18 shows sparsity and validation performance for the same coarse initial parameter sweep as in the initial data, as well as sparsity and test performance for a fine sweep within the region of lowest cross-entropy for all models. The third image in the figure is a zoomed version to provide more detail on the best-performing models.
The L1, SGL, and sparse attribution prior penalties were the best performing and the sparsest within these experiments.

Figure 16, Top: The sparse attribution prior provides the best performance, and the Gini penalty on gradients provides the next best; first-layer SGL does not improve over unregularized models. Middle: The sparse attribution prior also builds the sparsest models, though the Gini gradients penalty also has slightly higher sparsity than the other models; first-layer SGL is slightly sparser than unregularized models. Bottom: Scatterplot of model sparsity and validation performance for all models in the main-text experiments, averaged across the 100 replicates. The sparse attribution prior achieves the highest performance for most parameters, though there is one parameter setting for which first-layer SGL outperforms it in validation loss (SGL does not end up winning in final test performance, though, as seen in the bar plots). The only other model that often builds sparse models while maintaining performance is the Gini-based gradient penalty, though it is much less sparse.
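For completeness, a minimal sketch of the expected gradients estimator used throughout these experiments is shown below; grad_fn is a hypothetical callable returning the gradient of the model output w.r.t. its input, and the sampling follows the formulation described earlier (references drawn from the data, interpolation parameter drawn from U(0, 1)).

import numpy as np

def expected_gradients(grad_fn, x, background, n_samples=200, seed=0):
    # grad_fn(z): gradient of the model output w.r.t. the input z (assumed given).
    # x: the input to explain; background: [n, ...] array of reference samples.
    rng = np.random.default_rng(seed)
    attr = np.zeros_like(x)
    for _ in range(n_samples):
        ref = background[rng.integers(len(background))]  # x' ~ data distribution
        alpha = rng.uniform()                            # alpha ~ U(0, 1)
        attr += (x - ref) * grad_fn(ref + alpha * (x - ref))
    return attr / n_samples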
A method for encouraging axiomatic feature attributions of a deep model to match human intuition.
995
scitldr
Recurrent neural networks (RNNs) have shown excellent performance in processing sequence data. However, they are both complex and memory intensive due to their recursive nature. These limitations make RNNs difficult to embed on mobile devices requiring real-time processes with limited hardware resources. To address the above issues, we introduce a method that can learn binary and ternary weights during the training phase to facilitate hardware implementations of RNNs. As a result, using this approach replaces all multiply-accumulate operations by simple accumulations, bringing significant benefits to custom hardware in terms of silicon area and power consumption. On the software side, we evaluate the performance (in terms of accuracy) of our method using long short-term memories (LSTMs) and gated recurrent units (GRUs) on various sequential models including sequence classification and language modeling. We demonstrate that our method achieves competitive results on the aforementioned tasks while using binary/ternary weights during the runtime. On the hardware side, we present custom hardware for accelerating the recurrent computations of LSTMs with binary/ternary weights. Ultimately, we show that LSTMs with binary/ternary weights can achieve up to 12x memory saving and 10x inference speedup compared to the full-precision hardware implementation.

Convolutional neural networks (CNNs) have surpassed human-level accuracy in various complex tasks by obtaining a hierarchical representation with increasing levels of abstraction (BID3; BID31). As a result, they have been adopted in many applications for learning hierarchical representation of spatial data. CNNs are constructed by stacking multiple convolutional layers often followed by fully-connected layers (BID30). While the vast majority of network parameters (i.e. weights) are usually found in fully-connected layers, the computational complexity of CNNs is dominated by the multiply-accumulate operations required by convolutional layers (BID46). Recurrent neural networks (RNNs), on the other hand, have shown remarkable success in modeling temporal data (BID36; BID12; BID6). Similar to CNNs, RNNs are typically over-parameterized since they build on high-dimensional input/output/state vectors, and they suffer from high computational complexity due to their recursive nature (BID45; BID14). As a result, the aforementioned limitations make the deployment of CNNs and RNNs difficult on mobile devices that require real-time inference processes with limited hardware resources.

Several techniques have been introduced in the literature to address the above issues. In (BID40; BID22; BID29; BID42), it was shown that the weight matrix can be approximated using a lower-rank matrix. In (BID34; BID14; BID1), it was shown that a significant number of parameters in DNNs are non-contributory and can be pruned without any degradation in the final accuracy. Finally, quantization approaches were introduced in (BID33; BID9; BID26; BID20; BID39; BID19; BID49) to reduce the required bitwidth of weights/activations. In this way, power-hungry multiply-accumulate operations are replaced by simple accumulations while also reducing the number of memory accesses to the off-chip memory. Considering the improvement factor of each of the above approaches in terms of energy and power reductions, quantization has proven to be the most beneficial for hardware implementations.
However, all of the aforementioned quantization approaches focused on optimizing CNNs or fully-connected networks only. As a result, despite the remarkable success of RNNs in processing sequential data, RNNs have received the least attention for hardware implementations when compared to CNNs and fully-connected networks. In fact, the recursive nature of RNNs makes their quantization difficult. In (BID18), for example, it was shown that the well-known BinaryConnect technique fails to binarize the parameters of RNNs due to the exploding gradient problem. As a result, a binarized RNN was introduced in (BID18), with promising results on simple tasks and datasets. However, it does not generalize well on tasks requiring large inputs/outputs (BID45). In (BID45; BID20), multi-bit quantized RNNs were introduced. These works managed to match the accuracy of their full-precision counterparts while using up to 4 bits for data representations.

In this paper, we propose a method that learns recurrent binary and ternary weights in RNNs during the training phase and eliminates the need for full-precision multiplications during the inference time. In this way, all weights are constrained to {+1, −1} or {+1, 0, −1} in binary or ternary representations, respectively. Using the proposed approach, RNNs with binary and ternary weights can achieve the performance accuracy of their full-precision counterparts. In summary, this paper makes the following contributions:

• We introduce a method for learning recurrent binary and ternary weights during both forward and backward propagation phases, reducing both the computation time and the memory footprint required to store the extracted weights during the inference.

• We perform a set of experiments on various sequential tasks, such as sequence classification, language modeling, and reading comprehension. We then demonstrate that our binary/ternary models can achieve near state-of-the-art results with greatly reduced computational complexity.

• We present custom hardware to accelerate the recurrent computations of RNNs with binary or ternary weights. The proposed dedicated accelerator can save up to 12× of memory elements/bandwidth and speed up the recurrent computations by up to 10× when performing the inference computations.

During the binarization process, each element of the full-precision weight matrix W ∈ R^{d_I × d_J} with entries w_{i,j} is binarized as w_{i,j} = α_{i,j} w^B_{i,j}, where α_{i,j} ≥ 0, i ∈ {1, ..., d_I}, j ∈ {1, ..., d_J} and w^B_{i,j} ∈ {−1, +1}. In BinaryConnect, the binarized weight element w^B_{i,j} is obtained by the sign function while using a fixed scaling factor α for all the elements: w^B_{i,j} = α × sign(w_{i,j}). In TernaryConnect (BID33), values hesitating between +1 and −1 are clamped to zero to reduce the accuracy loss of binarization: w_{i,j} = α_{i,j} w^T_{i,j}, where w^T_{i,j} ∈ {−1, 0, +1}. To further improve the precision accuracy, TernaryConnect stochastically assigns ternary values to the weight elements by performing w_{i,j} = α × Bernoulli(|w_{i,j}|) × sign(w_{i,j}) while using a fixed scaling factor α for each layer. Ternary weight networks (TWNs) were then proposed to learn the factor α by minimizing the L2 distance between the full-precision and ternary weights for each layer. BID49 introduced DoReFa-Net as a method that can learn different bitwidths for weights, activations and gradients. Since the quantization functions used in the above works are not differentiable, the derivative of the loss l w.r.t. the full-precision W is approximated by the straight-through estimator ∂l/∂W ≈ ∂l/∂W^{B/T}, where W^B and W^T denote the binarized and ternarized weights, respectively. The trained ternary quantization (TTQ) method uses two asymmetric scaling parameters (α₁ for positive values and α₂ for negative values) to ternarize the weights. In loss-aware binarization (LAB) (BID18), the loss of binarization was explicitly considered. More precisely, the loss w.r.t. the binarized weights is minimized using the proximal Newton algorithm. BID17 extended LAB to support different bitwidths for the weights. This method is called loss-aware quantization (LAQ). Recently, BID13 introduced a new method that builds the full-precision weight matrix W as a sum of k binary weight matrices, i.e., w_{i,j} = Σ_{m=1}^{k} α_m β^m_{i,j} with β^m_{i,j} ∈ {−1, +1}. BID45 also uses a binary search tree to efficiently derive the binary codes β^m_{i,j}, improving the prediction accuracy. While using multiple binary weight matrices reduces the bitwidth by a factor of 32× compared to the full-precision counterpart, it increases not only the number of parameters but also the number of operations by a factor of k (BID45).
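To make this background concrete, the following sketch shows deterministic binarization/ternarization and the straight-through update; the threshold value and function names are illustrative assumptions of ours, not the exact formulations of the cited works.

import numpy as np

def binarize(W, alpha=1.0):
    # BinaryConnect-style deterministic binarization: w_B = alpha * sign(w).
    return alpha * np.sign(W)

def ternarize(W, alpha=1.0, threshold=0.5):
    # Weights hesitating between +1 and -1 are clamped to zero.
    W_n = np.clip(W / alpha, -1.0, 1.0)
    return alpha * np.sign(W_n) * (np.abs(W_n) > threshold)

def ste_update(W_full, grad_W_quantized, lr):
    # Straight-through estimator: reuse the gradient w.r.t. the quantized
    # weights as the gradient w.r.t. the full-precision weights.
    return W_full - lr * grad_W_quantized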
Since the quantization functions used in the above works are not differentiable, the derivative of the loss l w.r.t the full-precision W is approximated by DISPLAYFORM0 where W B and W T denote binarized and ternarized weights, respectively. introduced the trained ternary quantization (TTQ) method that uses two assymetric scaling parameters (α 1 for positive values and α 2 for negative values) to ternarize the weights. In loss-aware binarization (LAB) BID18 ), the loss of binarization was explicitly considered. More precisely, the loss w.r.t the binarized weights is minimized using the proximal Newton algorithm. BID17 extended LAB to support different bitwidths for the weights. This method is called loss-aware quantization (LAQ). Recently, BID13 introduced a new method that builds the full-precision weight matrix W as k multiple binary weight matrices: BID45 also uses a binary search tree to efficiently derive the binary codes β k i,j, improving the prediction accuracy. While using multiple binary weight matrices reduces the bitwidth by a factor of 32× compared to its full-precision counterpart, it increases not only the number of parameters but also the number of operations by a factor of k BID45 ). DISPLAYFORM1 Among the aforementioned methods, only works of BID45 and BID17 targeted RNNs to reduce their computational complexity and outperformed all the aforementioned methods in terms of the prediction accuracy. However, they have shown promising only on specific temporal tasks: the former targeted only the character-level language modeling task on small datasets while the latter performs the word-level language modeling task and matches the performance of the full-precision model when using k = 4. Therefore, there are no binary models that can match the performance of the full-precision model on the word-level language modeling task. More generally, there are no binary/ternary models that can perform different temporal tasks while achieving similar prediction accuracy to its full-precision counterpart is missing in literature. Despite the remarkable success of RNNs in processing variable-length sequences, they suffer from the exploding gradient problem that occurs when learning long-term dependencies BID2; BID38 ). Therefore, various RNN architectures such as Long Short-Term Memory (LSTM) BID16 ) and Gated Recurrent Unit (GRU) BID7 ) were introduced in literature to mitigate the exploding gradient problem. In this paper, we mainly focus on the LSTM architecture to learn recurrent binary/ternary weights due to their prevalent use in both academia and industry. The recurrent transition of LSTM is obtained by: DISPLAYFORM0 where DISPLAYFORM1 d h denote the recurrent weights and bias. The parameters h ∈ R d h and c ∈ R d h are hidden states. The logistic sigmoid function and Hadamard product are denoted as σ and, respectively. The updates of the LSTM parameters are regulated through a set of gates: f t, i t, o t and g t. Eq. FORMULA2 shows that the main computational core of LSTM is dominated by the matrix multiplications. The recurrent weight matrices W f h, W ih, W oh, W gh, W f x, W ix, W ox and W gx also contain the majority of the model parameters. As such, we aim to compensate the computational complexity of the LSTM cell and reduce the number of memory accesses to the energy/power-hungry DRAM by binarizing or ternarizing the recurrent weights. 
BID18 showed that methods ignoring the loss of the binarization process fail to binarize the weights in LSTMs, despite their remarkable performance on CNNs and fully-connected networks. In BinaryConnect, for example, the weights are binarized during the forward computations by thresholding, while the straight-through estimator is used to compute the gradient of the loss w.r.t. the full-precision weights without considering the quantization loss. When training is over, both the full-precision and the binarized weights can then be used to perform the inference computations of CNNs and fully-connected networks (BID33). However, using the aforementioned binarization approach in vanilla LSTMs fails on sequential tasks due to the vanishing gradient problem, as discussed in BID18.

To further explore the cause of this problem, we performed a set of experiments. We first measured the probability density of the gates and hidden states of a binarized LSTM and observed that the binarized LSTM fails to control the flow of information (see Appendix A for more details). More specifically, the input gate i and the output gate o tend to let all information through, the gate g tends to block all information, and the forget gate f cannot decide which information to let through. In the second experiment, we measured the probability density of the input gate i before its non-linear function is applied (i.e., i_p = W_ih h_{t−1} + W_ix x_t + b_i) at different iterations during the training process. In this experiment, we observed that the binarization process changes the probability density of the gates and hidden states during training, resulting in all positive values for i_p and values centered around 1 for the input gate i (see Appendix A for more details).

To address the above issue, we propose the use of batch normalization in order to learn binarized/ternarized recurrent weights. It is well-known that a network trained using batch normalization is less susceptible to different settings of hyperparameters and to changes in the distribution of the inputs to the model (BID21). The batch normalization transform can be formulated as

BN(x; φ, γ) = γ + φ ⊙ (x − E(x)) / √(V(x) + ε),

where x and ε denote the unnormalized vector and a regularization hyperparameter. The mean and standard deviation of the normalized vector are determined by the model parameters γ and φ, respectively. The statistics E(x) and V(x) denote the estimates of the mean and variance of the unnormalized vector for the current minibatch, respectively. Batch normalization is commonly applied to a layer where changing its parameters affects the distributions of the inputs to the next layer. This occurs frequently in RNNs, where the input at time t depends on the output at time t − 1. Several works have investigated batch normalization in RNNs (BID8; BID0) to improve their convergence speed and performance.

The main goal of our method is to represent each element of the full-precision weight matrix W either as w_{i,j} = α w^B_{i,j} with w^B_{i,j} ∈ {−1, +1}, or as w_{i,j} = α w^T_{i,j} with w^T_{i,j} ∈ {−1, 0, +1}, where α is a fixed scaling factor for all the weights, initialized from BID11. To this end, we first divide each weight by the factor α to normalize the weights. We then compute the probability of getting binary or ternary values for each element of the full-precision matrix W by

P(w^B_{i,j} = +1) = clip((w^N_{i,j} + 1)/2, 0, 1)

for binarization and

P(w^T_{i,j} = sign(w^N_{i,j})) = |w^N_{i,j}|,  P(w^T_{i,j} = 0) = 1 − |w^N_{i,j}|

for ternarization, where w^N_{i,j} denotes the normalized weight.
Afterwards, we stochastically sample from the Bernoulli distribution to obtain the binarized/ternarized weights: w^B_{i,j} = 2 Bernoulli(P(w^B_{i,j} = +1)) − 1 for binarization, and w^T_{i,j} = sign(w^N_{i,j}) Bernoulli(|w^N_{i,j}|) for ternarization (a concrete sketch of this sampling step is given at the end of this section). Finally, we batch normalize the vector-matrix multiplications between the input and hidden state vectors and the binarized/ternarized weights W^{B/T}_{fh}, W^{B/T}_{ih}, W^{B/T}_{oh}, W^{B/T}_{gh}, W^{B/T}_{fx}, W^{B/T}_{ix}, W^{B/T}_{ox} and W^{B/T}_{gx}. More precisely, we perform the recurrent computations by applying the batch normalization transform above to each of these vector-matrix products before the gate non-linearities. In fact, batch normalization cancels out the effect of the binarization/ternarization on the distribution of the gates and states during the training process. Moreover, batch normalization regulates the scale of the binarized/ternarized weights using its parameter φ in addition to α.

So far, we have only considered the forward computations. During the parameter update, we use full-precision weights since the parameter updates are small values. To update the full-precision weights, we use the straight-through estimator to approximate the gradient, since the binarization/ternarization functions are non-differentiable (see Algorithm 1 and its details in Appendix B). It is worth noting that using batch normalization makes the training process slower due to the additional computations required to perform the normalization.

In this section, we evaluate the performance of the proposed LSTMs with binary/ternary weights on different temporal tasks to show the generality of our method. We defer the hyperparameters and task details for each dataset to Appendix C due to the limited space. For character-level language modeling, the goal is to predict the next character, and the performance is evaluated in bits per character (BPC), where lower BPC is desirable. We conduct quantization experiments on the Penn Treebank (BID35), War & Peace (BID25) and Linux Kernel (BID25) corpora. For the Penn Treebank dataset, we use an LSTM model configuration and data preparation similar to BID37. For the War & Peace and Linux Kernel datasets, we also follow the LSTM model configurations and settings in BID25. TAB0 summarizes the performance of our binarized/ternarized models compared to state-of-the-art quantization methods reported in the literature. All the models reported in TAB0 use an LSTM layer with 1000, 512 and 512 units on a sequence length of 100 for the experiments on the Penn Treebank, War & Peace and Linux Kernel corpora, respectively. The experimental results show that our model with binary/ternary weights outperforms all the existing quantized models in terms of prediction accuracy. Moreover, our ternarized model achieves the same BPC values on the War & Peace and Penn Treebank datasets as the full-precision model (i.e., the baseline) while requiring a 32× smaller memory footprint. It is worth mentioning that the accuracy loss of our ternarized model over the full-precision baseline is small.

In order to evaluate the effectiveness of our method on a larger dataset for the character-level language modeling task, we use the Text8 dataset, which was derived from Wikipedia. For this task, we use one LSTM layer of size 2000 and train it on sequences of length 180. We follow the data preparation approach and settings of BID37. The test results are reported in TAB1. While our models use recurrent binary or ternary weights during runtime, they achieve acceptable performance when compared to the full-precision models.
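The stochastic sampling step referenced above can be sketched as follows; since the exact probability expressions were only partially recoverable, this follows the standard BinaryConnect/TernaryConnect forms and should be read as our assumption.

import numpy as np

def stochastic_binarize(W, alpha, rng):
    # p(w_B = +1) is a hard sigmoid of the normalized weight.
    p = np.clip((W / alpha + 1.0) / 2.0, 0.0, 1.0)
    return alpha * np.where(rng.random(W.shape) < p, 1.0, -1.0)

def stochastic_ternarize(W, alpha, rng):
    # p(w_T = sign(w_N)) = |w_N|; p(w_T = 0) = 1 - |w_N|.
    W_n = np.clip(W / alpha, -1.0, 1.0)
    nonzero = rng.random(W.shape) < np.abs(W_n)
    return alpha * np.sign(W_n) * nonzero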
Similar to character-level language modeling, the main goal of word-level modeling is to predict the next word. However, this task deals with large vocabulary sizes, making model quantization difficult. BID45 introduced a multi-bit quantization method, referred to as the alternating method, as a first attempt to reduce the complexity of the LSTMs used for this task. However, the alternating method only managed to almost match its performance with its full-precision counterpart using 4 bits (i.e., k = 4), and there is a huge gap in performance between its quantized model with 2 bits and the full-precision one. To show the effectiveness of our method over the alternating method, we use a small LSTM of size 300, similar to BID45, for a fair comparison. We also examine the prediction accuracy of our method over the medium and large models introduced by BID47: the medium model contains an LSTM layer of size 650 and the large model contains two LSTM layers of size 1500. We also use the same settings described in BID36 to prepare and train our model. TAB2 summarizes the performance of our models in terms of perplexity. The experimental results show that our binarized/ternarized models outperform the alternating method using 2-bit quantization in terms of both perplexity and memory size. Moreover, our medium-size model with binary weights also shows a substantial improvement over the alternating method using 4-bit quantization. Finally, our models with recurrent binary and ternary weights yield a comparable performance compared to their full-precision counterparts.

We perform the MNIST classification task (Le et al.) by processing each image pixel at each time step. In this task, we process the pixels in scanline order. We train our models using an LSTM with 100 nodes, followed by a softmax classifier layer. TAB3 reports the test performance of our models with recurrent binary/ternary weights. While our binary model uses a lower bit precision and fewer operations for the recurrent computations compared to the alternating models, its loss of accuracy is small. On the other hand, our ternary model requires the same memory size and achieves the same accuracy as the alternating method while requiring 2× fewer operations.

Hermann et al. recently introduced a challenging task that involves reading and comprehension of real news articles. More specifically, the main goal of this task is to answer questions about the context of the given article. To this end, they also introduced an architecture, called Attentive Reader, that exploits an attention mechanism to spot relevant information in the document. Attentive Reader uses two bidirectional LSTMs to encode the document and queries. To show the generality and effectiveness of our quantization method, we train Attentive Reader with our method to learn recurrent binary/ternary weights. We perform this task on the CNN corpus (Hermann et al.) by replicating Attentive Reader and using the settings described in (BID15). TAB4 shows the test accuracy of the binarized/ternarized Attentive Reader. The simulation results show that our Attentive Reader with binary/ternary weights yields a similar accuracy rate to its full-precision counterpart while requiring a 32× smaller memory footprint.

As discussed in Section 4, training models that ignore the quantization loss fail to quantize the weights in LSTMs, while they perform well on CNNs and fully-connected networks. To address this problem, we proposed the use of batch normalization during the quantization process.
To justify the importance of such a decision, we have performed different experiments over a wide range of temporal tasks and compared the accuracy performance of our binarization/ternarization method with BinaryConnect as a method that ignores the quantization loss. The experimental results showed that the BinaryConnect method fails to learn binary/ternary weights. On the other hand, our method not only learns recurrent binary/ternary weights but also outperforms all the existing quantization methods in the literature. It is also worth mentioning that the models trained with our method achieve a comparable accuracy performance w.r.t. their full-precision counterparts.

Figure 1(a) shows a histogram of the binary/ternary weights of the LSTM layer used for the character-level language modeling task on the Penn Treebank corpus. In fact, our model learns to use binary or ternary weights by steering the weights towards the deterministic values of −1, 0 or 1. Unlike CNNs or fully-connected networks trained with binary/ternary weights, which can use either the real-valued or the binary/ternary weights at inference time, the proposed LSTMs trained with binary/ternary weights can only perform the inference computations with binary/ternary weights. Moreover, the distribution of the weights is dominated by non-zero values for the model with ternary weights.

To show the effect of the probabilistic quantization on the prediction accuracy of temporal tasks, we adopted the ternarized network trained for the character-level language modeling task on the Penn Treebank corpus (see Section 5.1). We measured the prediction accuracy of this network on the test set over 10000 samples and report the distribution of the prediction accuracy in FIG0 (b). FIG0 (b) shows that the variance imposed by the stochastic ternarization on the prediction accuracy is very small and can be ignored. It is worth mentioning that we have also observed similar behavior for the other temporal tasks used in this paper.

FIG1 illustrates the learning curves and the generalization of our method to longer sequences on the validation set of the Penn Treebank corpus. In fact, the proposed training algorithm also retains the main features of using batch normalization, i.e., fast convergence and good generalization over long sequences. FIG1 (a) shows that our model converges faster than the full-precision LSTM for the first few epochs. After a certain point, the convergence rate of our method decreases, which prevents the model from early overfitting. FIG1 (b) also shows that our training method generalizes well over sequences longer than those seen during training. Similar to the full-precision baseline, our binary/ternary models learn to focus only on information relevant to the generation of the next target character. In fact, the prediction accuracy of our models improves as the sequence length increases, since longer sequences provide more information from the past for the generation of the next target character.

While we have only applied our binarization/ternarization method to LSTMs, it can also be used to binarize/ternarize other recurrent architectures such as GRUs. To show the versatility of our method, we repeat the character-level language modeling task performed in Section 5.1 using GRUs on the Penn Treebank, War & Peace and Linux Kernel corpora. We also adopted the same network configurations and settings used in Section 5.1 for each of the aforementioned corpora. Table 6 summarizes the performance of our binarized/ternarized models.
The simulation results show that our method can successfully binarize/ternarize the recurrent weights of GRUs.

As a final note, we have investigated the effect of using different batch sizes on the prediction accuracy of our binarized/ternarized models. To this end, we trained an LSTM of size 1000 over a sequence length of 100 and different batch sizes to perform the character-level language modeling task on the Penn Treebank corpus. Batch normalization cannot be used for a batch size of 1, as the output vector would be all zeros. Moreover, using a small batch size leads to a high variance when estimating the statistics of the unnormalized vector, and consequently a lower prediction accuracy than the baseline model without batch normalization, as shown in Figure 3. On the other hand, the prediction accuracy of our binarized/ternarized models improves as the batch size increases, while the prediction accuracy of the baseline model decreases.

Figure 3: Effect of different batch sizes on the prediction accuracy of the character-level language modeling task on the Penn Treebank corpus.

The introduced binarized/ternarized recurrent models can be exploited by various dataflows such as DaDianNao and TPU. In order to evaluate the effectiveness of LSTMs with recurrent binary/ternary weights, we build our binary/ternary architecture over DaDianNao as a baseline, which has proven to be the most efficient dataflow for DNNs with sigmoid/tanh functions. In fact, DaDianNao achieves a speedup of 656× and reduces the energy by 184× over a GPU. Moreover, some hardware techniques can be adopted on top of DaDianNao to further speed up the computations. For instance, it was shown that ineffectual computations of zero-valued weights can be skipped to improve the run-time performance of DaDianNao. In DaDianNao, a DRAM is used to store all the weights/activations and provide the required memory bandwidth for each multiply-accumulate (MAC) unit. For evaluation purposes, we consider two different application-specific integrated circuit (ASIC) architectures implementing the recurrent computations: a low-power implementation and a high-speed inference engine. We build these two architectures based on the aforementioned dataflow. For the low-power implementation, we use 100 MAC units. We also use a 12-bit fixed-point representation for both the weights and activations of the full-precision model as a baseline architecture. As a result, 12-bit multipliers are required to perform the recurrent computations. Note that using the 12-bit fixed-point representation for weights and activations guarantees no prediction accuracy loss in the full-precision models. For the LSTMs with recurrent binary/ternary weights, a 12-bit fixed-point representation is only used for the activations, and the multipliers in the MAC units are replaced with low-cost multiplexers. Similarly, using the 12-bit fixed-point representation for activations guarantees no prediction accuracy loss in the introduced binary/ternary models. We implemented our low-power inference engine for both the full-precision and binary/ternary-precision models in TSMC 65-nm CMOS technology. The synthesis results, excluding the implementation cost of the DRAM, are summarized in TAB7. They show that using recurrent binary/ternary weights results in up to 9× lower power and 10.6× lower silicon area compared to the baseline when performing the inference computations at 400 MHz.

For the high-speed design, we consider the same silicon area and power consumption for both the full-precision and binary/ternary-precision models.
Since the MAC units of the binary/ternary-precision model require less silicon area and power consumption as a result of using multiplexers instead of multipliers, we can instantiate up to 10× more MAC units, resulting in up to 10× speedup compared to the full-precision model (see TAB7). It is also worth noting that the models using recurrent binary/ternary weights require up to 12× less memory bandwidth than the full-precision models. More details on the proposed architecture are provided in Appendix D.

In this paper, we introduced a method that learns recurrent binary/ternary weights and eliminates most of the full-precision multiplications of the recurrent computations during the inference. We showed that the proposed training method generalizes well over long sequences and across a wide range of temporal tasks such as word/character language modeling and pixel-by-pixel classification tasks. We also showed that learning recurrent binary/ternary weights brings a major benefit to custom hardware implementations by replacing full-precision multipliers with hardware-friendly multiplexers and reducing the memory bandwidth. For this purpose, we introduced two ASIC implementations: a low-power implementation and a high-throughput implementation. The former architecture can save up to 9× in power consumption and the latter speeds up the recurrent computations by a factor of 10.

Figure 4: Probability density of states/gates for the BinaryConnect LSTM compared to its full-precision counterpart on the Penn Treebank character-level modeling task. Both models were trained for 50 epochs. The vertical axis denotes the time steps.

Figure 4 shows the probability density of the gates and hidden states of the BinaryConnect LSTM and its full-precision counterpart, both trained with 1000 units and a sequence length of 100 on the Penn Treebank corpus (BID35) for 50 epochs. The probability density curves show that the gates in the binarized LSTM fail to control the flow of information. More specifically, the input gate i and the output gate o tend to let all information through, the gate g tends to block all information, and the forget gate f cannot decide which information to let through. Measuring the pre-activation of the input gate during training also shows all positive values for i_p and values centered around 1 for the input gate i. In fact, the binarization process changes the probability density of the gates and hidden states during the training phase.

Learning recurrent binary/ternary weights is performed in two steps: forward propagation and backward propagation.

Forward propagation: A key point in learning recurrent binary/ternary weights is to batch-normalize the result of each vector-matrix multiplication with binary/ternary recurrent weights during the forward propagation. More precisely, we first binarize/ternarize the recurrent weights. Afterwards, the unit activations are computed while using the recurrent binarized/ternarized weights for each time step and recurrent layer. The unit activations are then normalized during the forward propagation.

Backward propagation: During the backward propagation, the gradient with respect to each parameter of each layer is computed. Then, the updates for the parameters are obtained using a learning rule. During the parameter update, we use full-precision weights since the parameter updates are small values. More specifically, the recurrent weights are only binarized/ternarized during the forward propagation. Algorithm 1 summarizes the training method that learns recurrent binary/ternary weights.
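A compact sketch of one training iteration, consistent with the two-phase procedure above, is given below; forward_backward is a hypothetical placeholder for the batch-normalized LSTM forward/backward pass, and stochastic_ternarize is the sampling routine sketched earlier.

def train_step(W_full, alpha, lr, batch, rng, forward_backward):
    # Forward pass uses freshly quantized weights; the update is applied to
    # the full-precision weights via the straight-through estimator.
    W_q = stochastic_ternarize(W_full, alpha, rng)
    loss, grad_Wq = forward_backward(batch, W_q)  # model-specific routine (assumed)
    W_full -= lr * grad_Wq
    return W_full, loss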
It is worth noting that batch normalizing the state unit c can optionally be used for better control over its relative contribution to the model. Penn Treebank: Similar to BID37, we split the Penn Treebank corpus into 5017k, 393k and 442k training, validation and test characters, respectively. For this task, we use an LSTM with 1000 units followed by a softmax classifier. The cross-entropy loss is minimized on minibatches of size 64 while using the ADAM learning rule. We use a learning rate of 0.002. We also use a training sequence length of 100. FIG5 depicts the probability density of the states/gates of our binarized model trained on the Penn Treebank corpus. While the probability density of our model is different from its full-precision counterpart (see Figure 4), it shows that the gates can control the flow of information. Linux Kernel and Leo Tolstoy's War & Peace: The Linux Kernel and Leo Tolstoy's War and Peace corpora consist of 6,206,996 and 3,258,246 characters and have vocabulary sizes of 101 and 87, respectively. We split these two datasets similarly to BID25. We use one LSTM layer of size 512 followed by a softmax classifier layer. We use an exponentially decaying learning rate initialized with 0.002. The ADAM learning rule is also used as the update rule. Text8: This dataset has a vocabulary size of 27 and consists of 100M characters. Following the data preparation approach of BID37, we split the data into training, validation and test sets of 90M, 5M and 5M characters, respectively. For this task, we use one LSTM layer of size 2000 and train it on sequences of length 180 with minibatches of size 128. A learning rate of 0.001 is used and the update rule is determined by ADAM. Penn Treebank: Similar to BID36, we split the Penn Treebank corpus with a 10K-word vocabulary, resulting in 929K training, 73K validation, and 82K test tokens. We start the training with a learning rate of 20. We then divide it by 4 every time we see an increase in the validation perplexity value. The model is trained with a word sequence length of 35 and dropout probabilities of 0.5, 0.65 and 0.65 for the small, medium and large models, respectively. Stochastic gradient descent is used to train our model while the gradient norm is clipped at 0.25. MNIST: The MNIST dataset contains 60,000 gray-scale images (50,000 for training and 10,000 for testing), falling into 10 classes. For this task, we process the pixels in scanline order: each image pixel is processed at each time step, similar to BID28. We train our models using an LSTM with 100 nodes, a softmax classifier layer and the ADAM step rule with a learning rate of 0.001. For this task, we split the data similarly to BID15. We adopt the Attentive Reader architecture to perform this task. We train the model using a bidirectional LSTM with a unit size of 256. We also use minibatches of size 128 and the ADAM learning rule. We use an exponentially decaying learning rate initialized with 0.003. We implemented our binary/ternary architecture in VHDL and synthesized it via Cadence Genus Synthesis Solution using TSMC 65nm GP CMOS technology. Figure 7 shows the latency of the proposed binary/ternary architecture for each time step and temporal task when performing the vector-matrix multiplications on binary/ternary weights. The simulation results show that performing the computations on binary and ternary weights can speed up the computations by factors of 10× and 5× compared to the full-precision models. Figure 7: Latency of the proposed accelerator over full-precision, binary and ternary models.
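For quick reference, the per-task settings listed above can be collected in one place. The values below are transcribed from the text; the dictionary structure and key names are ours:

    # Hyperparameters transcribed from the text above; key names are illustrative.
    configs = {
        "ptb_char":   dict(units=1000, seq_len=100, batch=64, lr=2e-3, opt="adam"),
        "linux_war":  dict(units=512, lr=2e-3, lr_decay="exponential", opt="adam"),
        "text8":      dict(units=2000, seq_len=180, batch=128, lr=1e-3, opt="adam"),
        "ptb_word":   dict(vocab=10_000, seq_len=35, lr=20.0, lr_div=4.0,
                           grad_clip=0.25, opt="sgd"),
        "mnist_pixel": dict(units=100, lr=1e-3, opt="adam"),
        "attentive_reader": dict(units=256, bidirectional=True, batch=128,
                                 lr=3e-3, lr_decay="exponential", opt="adam"),
    }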
We propose high-performance LSTMs with binary/ternary weights that can greatly reduce implementation complexity.
996
scitldr
This paper addresses unsupervised few-shot object recognition, where all training images are unlabeled and do not share classes with labeled support images for few-shot recognition in testing. We use a new GAN-like deep architecture aimed at unsupervised learning of an image representation which will encode latent object parts and thus generalize well to unseen classes in our few-shot recognition task. Our unsupervised training integrates adversarial, self-supervision, and deep metric learning. We make two contributions. First, we extend the vanilla GAN with a reconstruction loss to enforce the discriminator to capture the most relevant characteristics of "fake" images generated from randomly sampled codes. Second, we compile a training set of triplet image examples for estimating the triplet loss in metric learning by using an image masking procedure suitably designed to identify latent object parts. Hence, metric learning ensures that the deep representations of images showing similar object classes which share some parts are closer than the representations of images which do not have common parts. Our results show that we significantly outperform the state of the art, as well as get similar performance to the common episodic training for fully-supervised few-shot learning on the Mini-Imagenet and Tiered-Imagenet datasets. This paper presents a new deep architecture for unsupervised few-shot object recognition. In training, we are given a set of unlabeled images. In testing, we are given a small number K of support images with labels sampled from N object classes that do not appear in the training set (also referred to as unseen classes). Our goal in testing is to predict the label of a query image as one of these N previously unseen classes. A common approach to this N-way K-shot recognition problem is to take the label of the closest support to the query. Thus, our key challenge is to learn a deep image representation on unlabeled data such that it would generalize well in testing to unseen classes, so as to enable accurate distance estimation between the query and support images. Our unsupervised few-shot recognition problem is different from the standard few-shot learning, as the latter requires labeled training images (e.g., for episodic training). Also, our problem is different from the standard semi-supervised learning, where both unlabeled and labeled data are typically allowed to share either all or a subset of classes. When classes of unlabeled and labeled data are different in semi-supervised learning, the labeled dataset is typically large enough to allow transfer learning of knowledge from unlabeled to labeled data, which is not the case in our few-shot setting. There is scant work on unsupervised few-shot recognition. The state of the art first applies unsupervised clustering for learning pseudo labels of unlabeled training images, and then uses the standard few-shot learning on these pseudo labels for episodic training, e.g., with Prototypical Network or MAML. However, performance of this method is significantly below that of counterpart approaches to supervised few-shot learning. Our approach is aimed at learning an image representation from unlabeled data that captures presence or absence of latent object parts. We expect that such a representation would generalize well to unseen classes in our few-shot recognition task.
Figure 1: We use a GAN-like deep architecture to learn an image encoding z on unlabeled training data that will be suitable for few-shot recognition in testing. Our unsupervised training integrates adversarial, self-supervision, and metric learning. The figure illustrates our first contribution that extends the vanilla GAN (the red dashed line) with regularization so the encoding ẑ of a "fake" image is similar to the randomly sampled code z which has been used for generating the "fake" image. The self-supervision task is to predict the rotation angle of rotated real training images. Deep metric learning is illustrated in greater detail in Fig. 3.
This is because of the common assumption in computer vision that various distinct object classes share certain parts. Thus, while our labeled and unlabeled images do not show the same object classes, there may be some parts that appear in both training and test image sets. Therefore, an image representation that would capture presence of these common parts in unlabeled images is expected to also be suitable for representing unseen classes, and thus facilitate our N-way K-shot recognition. Toward learning such an image representation, in our unsupervised training, we integrate adversarial, self-supervision, and deep metric learning. As shown in Fig. 1, we use a GAN-like architecture for training a discriminator network D to encode real images into d-dimensional representations, which will be later used for few-shot recognition in testing. We also consider a discrete encoding z = D_z(x) ∈ {−1, 1}^d, and empirically discover that it gives better performance than the continuous counterpart. Hence our interpretation that binary values in the discrete z indicate presence or absence of d latent parts in images. In addition to D_z, the discriminator has two other outputs (i.e., heads), D_r/f and D_rot, for adversarial and self-supervised learning, respectively, as illustrated in Fig. 2. D is adversarially trained to distinguish between real and "fake" images, where the latter are produced by a generator network G as x = G(z), from image encodings z which are randomly sampled from the uniform distribution. Sampling from the uniform distribution is justified because latent parts shared among a variety of object classes appearing in the unlabeled training set are likely to be uniformly distributed across the training set. We extend the vanilla GAN with regularization aimed at minimizing a reconstruction loss between the sampled z and the corresponding embedding ẑ = D(G(z)). As our experiments demonstrate, this reconstruction loss plays an important role in training both D and G in combination with the adversarial loss, as both losses enforce G to generate as realistic images as possible and D to capture the most relevant image characteristics for reconstruction and real/fake recognition. Furthermore, following recent advances in self-supervised learning, we also augment our training set with rotated versions of the real images around their center, and train D to predict their rotation angles, α̂ = D_rot(Rotate(x, α)) ∈ {0, 1, 2, 3} · 90°. As in other approaches that use self-supervised learning, our results demonstrate that this data augmentation strengthens our unsupervised training and improves few-shot recognition. Finally, we use deep metric learning toward making the image encoding z = D_z(x) represent latent parts and in this way better capture similarity of object classes for our few-shot recognition.
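A minimal PyTorch sketch of how the three heads and their losses could be wired together follows. The backbone, head sizes, and batch construction are placeholder assumptions of ours; only the loss structure follows the description above:

    import torch
    import torch.nn.functional as F

    class Discriminator(torch.nn.Module):
        # Backbone and layer sizes are placeholders; only the head/loss wiring
        # mirrors the text above.
        def __init__(self, d=128):
            super().__init__()
            self.backbone = torch.nn.Sequential(
                torch.nn.Flatten(),
                torch.nn.Linear(3 * 64 * 64, 256),
                torch.nn.ReLU())
            self.head_rf = torch.nn.Linear(256, 1)    # D_r/f: real/fake logit
            self.head_rot = torch.nn.Linear(256, 4)   # D_rot: rotation class
            self.head_z = torch.nn.Linear(256, d)     # D_z: image encoding

        def forward(self, x):
            h = self.backbone(x)
            return self.head_rf(h), self.head_rot(h), self.head_z(h)

    D, d = Discriminator(), 128
    x_real = torch.randn(8, 3, 64, 64)
    z = torch.rand(8, d) * 2 - 1                      # codes sampled from U[-1, 1]
    x_fake = torch.randn(8, 3, 64, 64)                # stand-in for G(z)

    rf_real, _, _ = D(x_real)
    rf_fake, _, z_hat = D(x_fake)
    # Adversarial loss: distinguish real from fake.
    loss_adv = F.binary_cross_entropy_with_logits(rf_real, torch.ones_like(rf_real)) \
             + F.binary_cross_entropy_with_logits(rf_fake, torch.zeros_like(rf_fake))
    # Reconstruction: the encoding of G(z) should recover the sampled z
    # (z mapped from [-1, 1] to [0, 1] for the binary cross-entropy).
    loss_rec = F.binary_cross_entropy_with_logits(z_hat, (z + 1) / 2)
    # Self-supervision: predict the rotation applied to real images
    # (a single 90-degree rotation here, for brevity).
    x_rot = torch.rot90(x_real, k=1, dims=(2, 3))
    _, rot_logits, _ = D(x_rot)
    loss_rot = F.cross_entropy(rot_logits, torch.full((8,), 1, dtype=torch.long))
    loss_D = loss_adv + loss_rec + loss_rot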
We expect that various object classes share parts, and that more similar classes have more common parts. Therefore, the encodings of images showing similar (or different) object classes should have a small (or large) distance. To ensure this property, we use metric learning and compile a new training set of triplet images for estimating the standard triplet loss, as illustrated in Fig. 3. Since classes in our training set are not annotated, we form the triplet training examples by using an image masking procedure which is particularly suitable for identifying latent object parts. In the triplet, the anchor is the original (unmasked) image, the positive is an image obtained from the original by masking rectangular patches at the image periphery (e.g., top corner), and the negative is an image obtained from the original by masking centrally located image patches. By design, the negative image masks an important object part, and thus the deep representations of the anchor and the negative should have a large distance. Conversely, masking peripheral corners in the positive image does not cover any important parts of the object, and thus the deep representation of the positive should be very close to that of the anchor. In this way, our metric learning on the triplet training examples ensures that the learned image representation z accounts for similarity of object classes in terms of their shared latent parts. As our results show, this component of our unsupervised training further improves few-shot recognition in testing, to the extent that not only do we significantly outperform the state of the art but also get a performance that is on par with the common episodic training for fully-supervised few-shot learning on the Mini-Imagenet and Tiered-Imagenet datasets. Our contributions are twofold: • Extending the vanilla GAN with a reconstruction loss between uniformly sampled codes z and embeddings of the corresponding "fake" images, ẑ = D(G(z)). • The masking procedure for compiling triplet image examples and deep metric learning of z so it accounts for image similarity in terms of shared latent parts. The rest of this paper is organized as follows. Sec. 2 reviews previous work, Sec. 3 specifies our proposed approach, Sec. 4 presents our implementation details and experimental results, and finally, Sec. 5 gives our concluding remarks. This section reviews the related work on few-shot learning, including standard, semi-supervised and unsupervised few-shot learning. Few-shot learning is a type of transfer learning, where the goal is to transfer knowledge learned from a training set to the test set such that the model can recognize new classes from a few examples. Approaches to supervised few-shot learning can be broadly divided into three main groups based on metric learning, meta-learning, and hallucination. Metric-learning based approaches seek to learn embeddings such that they are close for same-class examples and far away for others. Representative methods include Matching networks, Prototypical networks and Relation networks. Meta-learning based approaches learn a meta-learner for learning a task-adaptive learner such that the latter performs well on new classes by parameter finetuning. Representative methods include MAML, Reptile, and many others. Finally, hallucination based few-shot learning first identifies rules for data augmentation from the training set. These rules are then used in testing to generate additional labeled examples for few-shot recognition. Representative methods include Imaginary network, f-VAEGAN-D2 and Delta-encoder.
Semi-supervised few-shot learning was introduced in , and further studied in (a). These approaches augment the labeled training set with unlabeled images. introduced the unsupervised few-shot learning problem, where the entire training set is unlabeled. They first create pseudo labels from unsupervised training, then apply the standard supervised few-shot learning on these pseudo labels of training examples. use clustering to identify pseudo labels, and treat each training example as belonging to a unique class. We differ from the above closely related approaches in two ways. First, we do not use the common episodic training for few-shot learning. Second, we ensure that our image representation respects distance relationships between dissimilar images when their important parts are masked. Our training set consists of unlabeled examples x_u with hidden classes y_u ∈ L_train. In testing, we are given support images x_s with labels y_s ∈ L_test sampled from N = |L_test| unseen classes, L_train ∩ L_test = ∅, where each unseen class has K examples. Our N-way K-shot task is to classify query images x_q into one of these N classes, y_q ∈ L_test. For this, we first compute deep image representations z_q = D_z(x_q) and z_s = D_z(x_s) of the query and support images using the discriminator of the deep architecture shown in Fig. 1. Then, for every unseen class n = 1, ..., N, we compute the prototype vector c_n as the mean of the K image encodings z_s = D_z(x_s) of class n: c_n = (1/K) Σ_{(x_s, y_s): y_s = n} D_z(x_s). Finally, we take the label of the closest c_n to z_q as our solution: ŷ_q = argmin_n ∆(z_q, c_n), where ∆ denotes a distance function, specified in Sec. 3.4. The same formulation of few-shot recognition is used in . Our deep architecture consists of a generator network G and a discriminator network D, which are learned by integrating adversarial, self-supervision and metric learning. To this end, we equip D with three output heads: the image encoding head D_z, the rotation prediction head D_rot for self-supervision, and the standard discriminating head D_r/f for distinguishing between real and "fake" images in adversarial training, as depicted in Fig. 3. We specify the adversarial loss functions for training D and G in the standard form, L_D = E_{x∼p_data(x)}[log D_r/f(x)] + E_{z∼p(z)}[log(1 − D_r/f(G(z)))] (equation 3), which D maximizes, and L_G = E_{z∼p(z)}[log(1 − D_r/f(G(z)))] (equation 4), which G minimizes, where E denotes the expected value, p_data(x) is a distribution of the unlabeled training images, and p(z) is a distribution of latent codes which are sampled for generating "fake" images. In our experiments, we have studied several specifications for p(z) aimed at modeling occurrences of latent parts across images, including the binomial distribution Bin(0.5), the Gaussian distribution N, and the uniform distribution U[−1, 1]. For all these specifications, we get similar performance. As shown in , optimizing the objectives in equation 3 and equation 4 is equivalent to minimizing the reverse KL divergence. For self-supervision, we rotate real images of the unlabeled training set around their center, and train D to predict the rotation angle α using the cross-entropy loss L_rot = −E_{x∼p_data(x)} E_α [log D_rot(α | x̃_α)], where x̃_α is the rotated version of x with angle α ∈ {0, 1, 2, 3} · 90°. We are aware that there are many other ways to incorporate self-supervision (e.g., the "jigsaw solver"). We choose image rotation for its simplicity and ease of implementation, as well as the state-of-the-art performance reported in the literature.
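Returning to the N-way K-shot prediction rule specified earlier in this section, a small PyTorch sketch. Euclidean distance stands in for the paper's distance function ∆, and random vectors stand in for D_z encodings:

    import torch

    def nway_kshot_predict(z_query, z_support, y_support, n_classes):
        # Prototype c_n: mean of the K support encodings of class n.
        protos = torch.stack([z_support[y_support == n].mean(dim=0)
                              for n in range(n_classes)])
        # Label of the query: the class whose prototype is closest.
        dists = torch.cdist(z_query.unsqueeze(0), protos).squeeze(0)
        return int(dists.argmin())

    z_support = torch.randn(5 * 5, 128)                 # 5-way 5-shot supports
    y_support = torch.arange(5).repeat_interleave(5)
    z_query = torch.randn(128)
    print(nway_kshot_predict(z_query, z_support, y_support, n_classes=5))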
We extend the vanilla GAN by making D reconstruct the probabilistically sampled latent code z ∼ p(z), which is passed to G to generate synthetic images. Thus, we use z as a "free" label for additional training of D and G along with the adversarial and self-supervision learning. The reconstruction loss is specified as the binary cross-entropy loss L_BCE = −(1/d) Σ_{m=1}^{d} [z_m log σ(ẑ_m) + (1 − z_m) log(1 − σ(ẑ_m))], where z is converted to the range [0, 1]^d for computing the loss, d is the length of z, z_m is the mth element of the latent code, ẑ_m is the mth value predicted by the discriminator's encoding head D_z, and σ(·) is the sigmoid function. We additionally train D to output image representations that respect distance relationships such that the more latent parts are shared between images, the closer their representations. To this end, we compile a training set of triplets anchor, positive, negative. The anchor z = D_z(x) represents an original image x from the unlabeled training set. The positives {z⁺} are obtained by masking one of the four corners of the anchor image: top-left, top-right, bottom-left, and bottom-right. The masking patch is selected to be relatively small, which ensures that little to no foreground is masked in the positives. The negatives {z⁻} are obtained by masking centrally located patches of the anchor image, which are likely to cover important object parts. The triplet loss is then specified as L_triplet = max(0, ∆(z, z⁺) − ∆(z, z⁻) + ρ), where ρ is a distance margin, and ∆ is a distance function between the encodings. 3.5 OUR UNSUPERVISED TRAINING Alg. 1 summarizes our unsupervised training that integrates adversarial, self-supervision and deep metric learning. For easier training of D and G, we divide learning into two stages. First, we perform the adversarial and self-supervision training by following the standard GAN training, where for each image sampled from the training set, t_1 = 1, ..., T_1, G is optimized once and D is optimized multiple times over t_2 = 1, ..., T_2 iterations (T_2 = 3). After convergence of the first training stage (T_1 = 50,000), the resulting discriminator is saved. In the second training stage, we continue with metric learning of D over the triplet image examples in t_3 = 1, ..., T_3 iterations (T_3 = 20,000), while simultaneously regularizing that the discriminator updates do not significantly deviate from the previously learned discriminator. Algorithm 1: Our unsupervised training consists of two stages. T_1 is the number of training iterations of the first stage aimed at adversarial and self-supervision learning; T_2 is the number of updates of D per one update of G in the first training stage; T_3 is the number of training iterations in the second stage aimed at metric learning. β, γ, δ, λ are non-negative hyper-parameters. Randomly sample a real training image x ∼ p_data(x) and take it as anchor; generate the positive and negative images by appropriately masking the anchor; form the corresponding triplet examples; return the learned discriminator D. We evaluate our approach on the two common few-shot learning datasets: Mini-Imagenet and Tiered-Imagenet. Mini-Imagenet contains 100 randomly chosen classes from ILSVRC-2012. We split these 100 classes into 64, 16 and 20 classes for meta-training, meta-validation, and meta-testing respectively. Each class contains 600 images of size 84 × 84. Tiered-Imagenet is a larger subset of ILSVRC-2012, consisting of 608 classes grouped into 34 high-level categories. These are divided into 20, 6 and 8 categories for meta-training, meta-validation, and meta-testing. This corresponds to 351, 97 and 160 classes for meta-training, meta-validation, and meta-testing respectively. This dataset aims to minimize the semantic similarity between the splits, as in Mini-Imagenet. All images are also of size 84 × 84. We are the first to report results of unsupervised few-shot recognition on the Tiered-Imagenet dataset.
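The two-stage schedule of Algorithm 1 can be sketched as the following skeleton. The update functions are stubs standing in for the loss computations described above; all names are ours:

    # Two-stage training skeleton following Algorithm 1 (illustrative stubs).
    class Data:
        def sample(self):
            return "image"                          # placeholder sample

    def update_discriminator(D, G, x): pass         # adversarial + rotation + BCE losses
    def update_generator(G, D): pass                # adversarial + BCE losses
    def update_metric(D, triplet, reference): pass  # triplet loss, regularized toward stage-1 D
    def make_masked_triplet(x): return (x, x, x)    # corner / center masking

    def train(data, G, D, T1=50_000, T2=3, T3=20_000):
        for _ in range(T1):                 # stage 1: adversarial + self-supervision
            x = data.sample()
            for _ in range(T2):             # D updated T2 = 3 times per G update
                update_discriminator(D, G, x)
            update_generator(G, D)
        D_stage1 = D                        # discriminator saved after stage 1
        for _ in range(T3):                 # stage 2: metric learning on triplets
            triplet = make_masked_triplet(data.sample())
            update_metric(D, triplet, reference=D_stage1)
        return D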
For our unsupervised few-shot problem, we ignore all ground-truth labeling information in the training and validation sets, and only use ground-truth labels of the test set for evaluation. We also resize all images to size 64 × 64 in order to match the required input size of the GAN. For hyper-parameter tuning, we use the validation loss of corresponding ablations. Evaluation metrics: We first randomly sample N classes from the test classes and K examples for each sampled class, and then classify query images into these N classes. We report the average accuracy of the N-way K-shot classification over 1000 episodes, with 95% confidence intervals. Implementation details: We implement our approach, and conduct all experiments, in PyTorch. The backbone GAN that we use is the Spectral Norm GAN (SN-GAN) combined with self-modulated batch normalization. The number of blocks of layers in both G and D is 4. The dimension of the latent code/representation z is d = 128. We use an Adam optimizer with a constant learning rate of 5e−4. D is updated in T_2 = 3 iterations for every update of G. In the first and the second training stages, the mini-batch size is 128 and 32, respectively. The latter is smaller, since we have to enumerate 16 masked images at 16 locations of a 4 × 4 grid for each original training image. That is, our image masking for generating positive and negative images is performed by placing a 16 × 16 patch centered at the 4 × 4 grid locations in the original image, where the patch brightness is equal to the average of image pixels. We empirically observe convergence of the first and second training stages of our full approach after T_1 = 50000 and T_3 = 20000 iterations, respectively. In all experiments, we set γ = 1, β = 1, δ = 1, λ = 0.2, ρ = 0.5, as they are empirically found to give the best performance. It is worth noting that, beyond generating data for self-supervision and metric learning, we do not employ the recently popular data-augmentation techniques in training (e.g., image jittering, random crop, etc.). We define the following simpler variants of our approach for testing how its individual components affect performance. The variants include: • GAN: the Spectral Norm GAN (SN-GAN) with self-modulated batch normalization, as shown in Fig. 1 within the red dashed line. • GAN + BCE: extends training of the GAN with the reconstruction loss. • GAN + BCE + ROT: extends training of the GAN + BCE with the rotation prediction loss. • GAN + BCE + ROT + METRIC: our full model that extends the GAN + BCE + ROT with the triplet loss. Ablation study and comparison with the state of the art: Table 1 presents results of our ablations and a comparison with the state-of-the-art methods on Mini-Imagenet and Tiered-Imagenet, in 1-shot and 5-shot testing. For fair comparison, we follow the standard algorithm for assigning labels to query images in the 1-shot and 5-shot testing, as used in prior work. From Table 1, our reconstruction loss plays a crucial role, since it improves performance of GAN + BCE by nearly 9% relative to that of GAN. Importantly, our ablation GAN + BCE already outperforms all related work by a large margin. This suggests that using a simple reconstruction loss improves training of the vanilla GAN. Adding the rotation loss (GAN + BCE + ROT) further improves performance by 1%. Finally, the proposed triplet loss in GAN + BCE + ROT + METRIC gives an additional performance gain of 3% and achieves the state-of-the-art results.
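The masking procedure just described (a 16 × 16 mean-brightness patch placed at grid locations of a 64 × 64 image) and the triplet loss can be sketched as follows in PyTorch; the particular grid positions, the Euclidean stand-in for ∆, and the random encodings are illustrative assumptions:

    import torch
    import torch.nn.functional as F

    def mask_patch(img, cy, cx, size=16):
        # Replace a size x size patch centered at (cy, cx) with the image mean
        # (the paper sets the patch brightness to the average of image pixels).
        out = img.clone()
        half = size // 2
        out[:, max(cy - half, 0):cy + half, max(cx - half, 0):cx + half] = img.mean()
        return out

    img = torch.rand(3, 64, 64)
    anchor = img
    positive = mask_patch(img, cy=8, cx=8)       # peripheral corner location
    negative = mask_patch(img, cy=32, cx=32)     # central location

    def triplet_loss(z_a, z_pos, z_neg, rho=0.5):
        # Hinge on the margin rho; Euclidean distance stands in for Delta.
        return F.relu((z_a - z_pos).norm() - (z_a - z_neg).norm() + rho)

    z_a, z_p, z_n = torch.randn(3, 128).unbind(0)   # stand-ins for D_z outputs
    print(float(triplet_loss(z_a, z_p, z_n)))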
Interestingly, in the one-shot setting, our full approach GAN + BCE + ROT + METRIC also outperforms the recent fully-supervised approach of ProtoNets trained on the labeled training set. Qualitative results: Fig. 4 illustrates our masking procedure for generating negative images in the triplets for metric learning. In each row, the images are organized from left to right by their estimated distance to the original (unmasked) image in descending order, where the rightmost image is the closest. From Fig. 4, our metric learning ensures that the image representation captures important object parts, so when such parts are missing in the masked images, their distances to the original image are greater than the distances of other masked images missing less-important parts. We have addressed unsupervised few-shot object recognition, where all training images are unlabeled and do not share classes with test images. A new GAN-like deep architecture has been proposed for unsupervised learning of an image representation which respects image similarity in terms of shared latent parts. Table 1 (excerpt; accuracy on Mini-Imagenet 1-shot / 5-shot and Tiered-Imagenet 1-shot / 5-shot): AAL-ProtoNets: 37.67 ± 0.39 / 40.29 ± 0.68 / -- / --; UMTRA + AutoAugment: 39.93 / 50.73 / -- / --; Ours: 46.56 ± 0.76 / 62.29 ± 0.71 / 46.52 ± 0.72 / 66.15 ± 0.74. Figure 4: Our image masking with rectangular patches for Mini-Imagenet. In every row, the images are organized from left to right in descending order by their estimated distance to the original (unmasked) image. We have made two contributions, by extending the vanilla GAN with a reconstruction loss and by integrating deep metric learning with the standard adversarial and self-supervision learning. Our results demonstrate that our approach generalizes well to unseen classes, outperforming the state of the art by more than 8% in both 1-shot and 5-shot recognition tasks on the benchmark Mini-Imagenet dataset. We have reported the first results of unsupervised few-shot recognition on the Tiered-Imagenet dataset. Our ablations have shown that our first contribution alone leads to superior performance relative to that of closely related approaches, and that the addition of the second contribution further improves our 1-shot and 5-shot recognition by 3%. We also outperform a recent fully-supervised approach to few-shot learning that uses the common episodic training on the same datasets.
We address the problem of unsupervised few-shot object recognition, where all training images are unlabeled and do not share classes with test images.
997
scitldr
Existing black-box attacks on deep neural networks (DNNs) so far have largely focused on transferability, where an adversarial instance generated for a locally trained model can "transfer" to attack other learning models. In this paper, we propose novel Gradient Estimation black-box attacks for adversaries with query access to the target model's class probabilities, which do not rely on transferability. We also propose strategies to decouple the number of queries required to generate each adversarial sample from the dimensionality of the input. An iterative variant of our attack achieves close to 100% adversarial success rates for both targeted and untargeted attacks on DNNs. We carry out extensive experiments for a thorough comparative evaluation of black-box attacks and show that the proposed Gradient Estimation attacks outperform all transferability based black-box attacks we tested on both MNIST and CIFAR-10 datasets, achieving adversarial success rates similar to well known, state-of-the-art white-box attacks. We also apply the Gradient Estimation attacks successfully against a real-world content moderation classifier hosted by Clarifai. Furthermore, we evaluate black-box attacks against state-of-the-art defenses. We show that the Gradient Estimation attacks are very effective even against these defenses. The ubiquity of machine learning provides adversaries with both opportunities and incentives to develop strategic approaches to fool learning systems and achieve their malicious goals. Many attack strategies devised so far to generate adversarial examples to fool learning systems have been in the white-box setting, where adversaries are assumed to have access to the learning model (BID18; BID0; BID1; BID6). However, in many realistic settings, adversaries may only have black-box access to the model, i.e., they have no knowledge about the details of the learning system such as its parameters, but they may have query access to the model's predictions on input samples, including class probabilities. For example, we find this to be the case in some popular commercial AI offerings, such as those from IBM, Google and Clarifai. With access to query outputs such as class probabilities, the training loss of the target model can be found, but without access to the entire model, the adversary cannot access the gradients required to carry out white-box attacks. Most existing black-box attacks on DNNs have focused on transferability based attacks (BID12; BID7; BID13), where adversarial examples crafted for a local surrogate model can be used to attack the target model to which the adversary has no direct access. The exploration of other black-box attack strategies is thus somewhat lacking so far in the literature. In this paper, we design powerful new black-box attacks using limited query access to learning systems, which achieve adversarial success rates close to those of white-box attacks. These black-box attacks help us understand the extent of the threat posed to deployed systems by adversarial samples. The code to reproduce our results can be found at https://github.com/anonymous. New black-box attacks. We propose novel Gradient Estimation attacks on DNNs, where the adversary is only assumed to have query access to the target model. These attacks do not need any access to a representative dataset or any knowledge of the target model architecture.
In the Gradient Estimation attacks, the adversary adds perturbations proportional to the estimated gradient, instead of the true gradient as in white-box attacks (BID0). Since the direct Gradient Estimation attack requires a number of queries on the order of the dimension of the input, we explore strategies for reducing the number of queries to the target model. We also experimented with Simultaneous Perturbation Stochastic Approximation (SPSA) and Particle Swarm Optimization (PSO) as alternative methods to carry out query-based black-box attacks, but found Gradient Estimation to work the best. Query-reduction strategies. We propose two strategies: random feature grouping and principal component analysis (PCA) based query reduction. In our experiments with the Gradient Estimation attacks on state-of-the-art models on the MNIST (784 dimensions) and CIFAR-10 (3072 dimensions) datasets, we find that they match white-box attack performance, achieving attack success rates up to 90% for single-step attacks in the untargeted case and up to 100% for iterative attacks in both targeted and untargeted cases. We achieve this performance with just 200 to 800 queries per sample for single-step attacks and around 8,000 queries for iterative attacks. This is much fewer than the closest related attack. While they achieve similar success rates to our attack, the running time of their attack is up to 160× longer for each adversarial sample (see Appendix I.6). A further advantage of the Gradient Estimation attack is that it does not require the adversary to train a local model, which could be an expensive and complex process for real-world datasets, in addition to the fact that training such a local model may require even more queries based on the training data. Attacking real-world systems. To demonstrate the effectiveness of our Gradient Estimation attacks in the real world, we also carry out a practical black-box attack using these methods against the Not Safe For Work (NSFW) classification and Content Moderation models developed by Clarifai, which we choose due to their socially relevant application. These models have begun to be deployed for real-world moderation (BID4), which makes such black-box attacks especially pernicious. We carry out these attacks with no knowledge of the training set. We have demonstrated successful attacks (FIG0) with just around 200 queries per image, taking around a minute per image. In FIG0, the target model classifies the adversarial image as 'safe' with high confidence, in spite of the content that had to be moderated still being clearly visible. We note here that due to the nature of the images we experiment with, we only show one example here, as the others may be offensive to readers. The full set of images is hosted anonymously at https://www.dropbox.com/s/xsu31tjr0yq7rj7/clarifai-examples.zip?dl=0. Comparative evaluation of black-box attacks. We carry out a thorough empirical comparison of various black-box attacks (given in TAB8) on both the MNIST and CIFAR-10 datasets. We study attacks that require zero queries to the learning model, including the addition of perturbations that are either random or proportional to the difference of means of the original and targeted classes, as well as various transferability based black-box attacks. We show that the proposed Gradient Estimation attacks outperform other black-box attacks in terms of attack success rate and achieve results comparable with white-box attacks.
In addition, we also evaluate the effectiveness of these attacks on DNNs made more robust using adversarial training (BID0; BID18) and its recent variants, including ensemble adversarial training (BID21) and iterative adversarial training (BID9). We find that although standard and ensemble adversarial training confer some robustness against single-step attacks, they are vulnerable to iterative Gradient Estimation attacks, with adversarial success rates in excess of 70% for both targeted and untargeted attacks. We find that our methods outperform other black-box attacks and achieve performance comparable to white-box attacks. Related Work. Existing black-box attacks that do not use a local model were first proposed for convex-inducing two-class classifiers by BID11. For malware data, genetic algorithms and hill climbing algorithms have been used to craft adversarial samples. These methods are prohibitively expensive for non-categorical and high-dimensional data such as images. BID13 proposed using queries to a target model to train a local surrogate model, which was then used to generate adversarial samples. This attack relies on transferability. To the best of our knowledge, the only previous literature on query-based black-box attacks in the deep learning setting is independent work by BID10 and the authors of ZOO. BID10 propose a greedy local search to generate adversarial samples by perturbing randomly chosen pixels and using those which have a large impact on the output probabilities. Their method uses 500 queries per iteration, and the greedy local search is run for around 150 iterations for each image, resulting in a total of 75,000 queries per image, which is much higher than any of our attacks. Further, we find that our methods achieve higher targeted and untargeted attack success rates on both MNIST and CIFAR-10 as compared to their method. The ZOO black-box attack method also uses the method of finite differences to estimate the derivative of a function. However, while we propose attacks that compute an adversarial perturbation by approximating FGSM and iterative FGS, ZOO approximates the Adam optimizer while trying to perform coordinate descent on the loss function proposed by BID1. Neither of these works demonstrates the effectiveness of their attacks on real-world systems or on state-of-the-art defenses. In this section, we will first introduce the notation we use throughout the paper and then describe the evaluation setup and metrics used in the remainder of the paper. A classifier f(·; θ): X → Y is a function mapping from the domain X to the set of classification outputs Y (Y = {0, 1} in the case of binary classification, i.e., Y is the set of class labels). The number of possible classification outputs is then |Y|. θ is the set of parameters associated with a classifier. Throughout, the target classifier is denoted as f(·; θ), but the dependence on θ is dropped if it is clear from the context. H denotes the constraint set which an adversarial sample must satisfy. ℓ_f(x, y) is used to represent the loss function for the classifier f with respect to inputs x ∈ X and their true labels y ∈ Y. Since the black-box attacks we analyze focus on neural networks in particular, we also define some notation specifically for neural networks. The outputs of the penultimate layer of a neural network f, representing the output of the network computed sequentially over all preceding layers, are known as the logits. We represent the logits as a vector φ_f(x) ∈ R^{|Y|}.
The final layer of a neural network f used for classification is usually a softmax layer, represented as a vector of probabilities p_f(x) = softmax(φ_f(x)) ∈ [0, 1]^{|Y|}. The empirical evaluation carried out in Section 3 is on state-of-the-art neural networks on the MNIST and CIFAR-10 datasets. The details of the datasets are given in Appendix C.1, and the architecture and training details for all models are given in Appendix C.2. Only results for untargeted attacks are given in the main body of the paper. All results for targeted attacks are contained in Appendix E. We use two different loss functions in our evaluation, the standard cross-entropy loss (abbreviated as xent) and the logit-based loss (ref. Section 3.1.2, abbreviated as logit). In all of these attacks, the adversary's perturbation is constrained using the L∞ distance. The details of baseline black-box attacks and their results can be found in Appendix A.1.1. Similarly, detailed descriptions and results for transferability-based attacks are in Appendix A.2. The full set of attacks that was evaluated is given in TAB8 in Appendix G, which also provides a taxonomy for black-box attacks. MNIST. Each pixel of the MNIST image data is scaled to [0, 1]. We trained four different models on the MNIST dataset, denoted Models A to D, which are used by BID21 and represent a good variety of architectures. For the attacks constrained with the L∞ distance, we vary the adversary's perturbation budget ε from 0 to 0.4, since at a perturbation budget of 0.5, any image can be made solid gray. CIFAR-10. Each pixel of the CIFAR-10 image data is in [0, 255]. We choose three model architectures for this dataset, which we denote as Resnet-32, Resnet-28-10 (ResNet variants), and Std.-CNN (a standard CNN from Tensorflow BID0). For the attacks constrained with the L∞ distance, we vary the adversary's perturbation budget ε from 0 to 28. Throughout the paper, we use standard metrics to characterize the effectiveness of various attack strategies. For MNIST, all metrics for single-step attacks are computed with respect to the test set consisting of 10,000 samples, while metrics for iterative attacks are computed with respect to the first 1,000 samples from the test set. For the CIFAR-10 data, we choose 1,000 random samples from the test set for single-step attacks and 100 random samples for iterative attacks. In our evaluations of targeted attacks, we choose the target T for each sample uniformly at random from the set of classification outputs, except the true class y of that sample. Attack success rate. The main metric, the attack success rate, is the fraction of samples that meets the adversary's goal: f(x_adv) ≠ y for untargeted attacks and f(x_adv) = T for targeted attacks with target T (BID18; BID21). Alternative evaluation metrics are discussed in Appendix C.3. Average distortion. We also evaluate the average distortion for adversarial examples using the average L2 distance between the benign samples and the adversarial ones, ∆(X, X_adv) = (1/N) Σ_{i=1}^{N} ||X_adv^i − X^i||_2, where N is the number of samples. This metric allows us to compare the average distortion for attacks which achieve similar attack success rates, and therefore infer which one is stealthier. Number of queries. Query based black-box attacks make queries to the target model, and this metric may affect the cost of mounting the attack. This is an important consideration when attacking real-world systems which have costs associated with the number of queries made. Deployed learning systems often provide feedback for input samples provided by the user.
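The attack success rate and average distortion metrics above translate directly into code; a small NumPy sketch of ours, with array shapes assumed:

    import numpy as np

    def attack_success_rate(preds_adv, y_true, target=None):
        # Untargeted: fraction misclassified; targeted: fraction hitting the target.
        if target is None:
            return float(np.mean(preds_adv != y_true))
        return float(np.mean(preds_adv == target))

    def average_distortion(X, X_adv):
        # Mean L2 distance between benign and adversarial samples.
        diffs = (X_adv - X).reshape(len(X), -1)
        return float(np.mean(np.linalg.norm(diffs, axis=1)))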
Given query feedback, different adaptive, query-based algorithms can be applied by adversaries to understand the system and iteratively generate effective adversarial examples to attack it. Formal definitions of query-based attacks are in Appendix D. We initially explored a number of methods of using query feedback to carry out black-box attacks, including Particle Swarm Optimization and Simultaneous Perturbation Stochastic Approximation (BID16). However, these methods were not effective at finding adversarial examples, for reasons detailed in Section 3.4, which also contains the results obtained. Given the fact that many white-box attacks for generating adversarial examples are based on gradient information, we then tried directly estimating the gradient to carry out black-box attacks, and found it to be very effective in a range of conditions. In other words, the adversary can approximate white-box Single-step and Iterative FGSM attacks (BID0) using estimates of the losses that are needed to carry out those attacks. We first propose a Gradient Estimation black-box attack based on the method of finite differences (BID17). The drawback of a naive implementation of the finite difference method, however, is that it requires O(d) queries per input, where d is the dimension of the input. This leads us to explore methods such as random grouping of features and feature combination using components obtained from Principal Component Analysis (PCA) to reduce the number of queries. Threat model and justification. We assume that the adversary can obtain the vector of output probabilities for any input x. The set of queries the adversary can make is then Q_f = {p_f(x), ∀x}. Note that an adversary with access to the softmax probabilities will be able to recover the logits up to an additive constant, by taking the logarithm of the softmax probabilities. For untargeted attacks, the adversary only needs access to the output probabilities for the two most likely classes. A compelling reason for assuming this threat model for the adversary is that many existing cloud-based ML services allow users to query trained models (Watson Visual Recognition, Clarifai, Google Vision API). The results of these queries are confidence scores which can be used to carry out Gradient Estimation attacks. These trained models are often deployed by the clients of these ML-as-a-service (MLaaS) providers (BID4). Thus, an adversary can pose as a user of an MLaaS provider and create adversarial examples using our attack, which can then be used against any client of that provider. In this section, we focus on the method of finite differences to carry out Gradient Estimation based attacks. All the analysis and results are presented for untargeted attacks, but can be easily extended to targeted attacks (Appendix E). Let the function whose gradient is being estimated be g(x). The input to the function is a d-dimensional vector x, whose elements are represented as x_i, where i ∈ {1, ..., d}. The canonical basis vectors are represented as e_i, where e_i is 1 only in the ith component and 0 everywhere else. Then, a two-sided estimation of the gradient of g with respect to x is given by FD_x(g(x), δ) = [(g(x + δe_1) − g(x − δe_1))/(2δ), ..., (g(x + δe_d) − g(x − δe_d))/(2δ)]^T (Equation 1). Here, δ is a free parameter that controls the accuracy of the estimation. A one-sided approximation can also be used, but will be less accurate. If the gradient of the function g exists, then lim_{δ→0} FD_x(g(x), δ) = ∇_x g(x).
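A direct NumPy sketch of the two-sided estimator in Equation 1; here g is any scalar-valued function the adversary can query (e.g., a class probability), and the loop issues the 2d queries discussed above:

    import numpy as np

    def fd_gradient(g, x, delta=0.01):
        # Two-sided finite-difference estimate of the gradient of g at x
        # (2d queries to g in total).
        grad = np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e.flat[i] = 1.0
            grad.flat[i] = (g(x + delta * e) - g(x - delta * e)) / (2 * delta)
        return grad

    g = lambda x: float(np.sum(x ** 2))      # toy stand-in for a queried function
    x = np.array([1.0, -2.0, 0.5])
    print(fd_gradient(g, x))                 # approx. 2*x = [2, -4, 1]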
The finite difference method is useful for a black-box adversary aiming to approximate a gradient based attack, since the gradient can be directly estimated with access to only the function values. In the untargeted FGS method, the gradient is usually taken with respect to the cross-entropy loss between the true label of the input and the softmax probability vector. The cross-entropy loss of a network f at an input x is ℓ_f(x, y) = −log p_f^y(x), where y is the index of the original class of the input. The gradient of ℓ_f(x, y) is ∇_x ℓ_f(x, y) = −∇_x p_f^y(x) / p_f^y(x) (Eq. 2). An adversary with query access to the softmax probabilities then just has to estimate the gradient of p_f^y(x) and plug it into Eq. 2 to get the estimated gradient of the loss. The adversarial sample thus generated is x_adv = x + ε · sign(−FD_x(p_f^y(x), δ) / p_f^y(x)). This method of generating adversarial samples is denoted as FD-xent. We also use a loss function based on logits, which was found to work well for white-box attacks by BID1. The loss function is given by ℓ(x, y) = max(φ(x + δ)_y − max{φ(x + δ)_i : i ≠ y}, −κ), where y represents the ground truth label for the benign sample x and φ(·) are the logits. κ is a confidence parameter that can be adjusted to control the strength of the adversarial perturbation. If the confidence parameter κ is set to 0, the logit loss is max(φ(x + δ)_y − max{φ(x + δ)_i : i ≠ y}, 0). For an input that is correctly classified, the first term is always greater than 0, and for an incorrectly classified input, an untargeted attack is not meaningful to carry out. Thus, the loss term reduces to φ(x + δ)_y − max{φ(x + δ)_i : i ≠ y} for relevant inputs. An adversary can compute the logit values up to an additive constant by taking the logarithm of the softmax probabilities, which are assumed to be available in this threat model. Since the loss function is equal to the difference of logits, the additive constant is canceled out. Then, the finite differences method can be used to estimate the difference between the logit values for the original class y and the second most likely class y′, i.e., the one given by y′ = argmax_{i ≠ y} φ(x)_i. The untargeted adversarial sample generated for this loss in the white-box case is x_adv = x − ε · sign(∇_x(φ(x)_y − φ(x)_{y′})). Similarly, in the case of a black-box adversary with query access to the softmax probabilities, the adversarial sample is x_adv = x − ε · sign(FD_x(φ(x)_y − φ(x)_{y′}, δ)). This attack is denoted as FD-logit. Table 1: Untargeted black-box attacks: Each entry has the attack success rate for the attack method given in that column on the model in each row. The number in parentheses for each entry is ∆(X, X_adv), the average distortion over all samples used in the attack. In each row, the entry in bold represents the black-box attack with the best performance on that model. Gradient Estimation using Finite Differences is our method, which has performance matching white-box attacks. The iterative variant of the gradient based attack described in Section A.1.2 is a powerful attack that often achieves much higher attack success rates in the white-box setting than the simple single-step gradient based attacks. Thus, it stands to reason that a version of the iterative attack with estimated gradients will also perform better than the single-step attacks described until now. An iterative attack with t + 1 iterations using the cross-entropy loss is x_adv^{t+1} = Π_H(x_adv^t + α · sign(FD_{x_adv^t}(ℓ_f(x_adv^t, y), δ))), where α is the step size, Π_H denotes projection, and H is the constraint set for the adversarial sample. This attack is denoted as IFD-xent. If the logit loss is used instead, it is denoted as IFD-logit.
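Putting the pieces together, the single-step FD-logit attack and its iterative variant can be sketched as below. The sketch reuses fd_gradient from the previous snippet and assumes a helper logit_diff(x) that returns φ(x)_y − max_{i≠y} φ(x)_i, computable from the logarithm of the queried probabilities; it also assumes pixels in [0, 1] and the L∞ ball around x as the constraint set H:

    import numpy as np

    def fd_logit_attack(logit_diff, x, eps, delta=0.01):
        # Single-step untargeted attack: step against the estimated gradient of
        # phi(x)_y - max_{i != y} phi(x)_i, clipped to the valid pixel range.
        grad = fd_gradient(logit_diff, x, delta)        # from the earlier sketch
        return np.clip(x - eps * np.sign(grad), 0.0, 1.0)

    def ifd_logit_attack(logit_diff, x, eps, alpha=0.01, steps=40, delta=0.01):
        # Iterative variant: small steps, each projected back into the
        # L-infinity ball of radius eps around the original input.
        x_adv = x.copy()
        for _ in range(steps):
            grad = fd_gradient(logit_diff, x_adv, delta)
            x_adv = x_adv - alpha * np.sign(grad)
            x_adv = np.clip(x_adv, x - eps, x + eps)    # projection onto H
            x_adv = np.clip(x_adv, 0.0, 1.0)
        return x_adv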
In this section, we summarize the results obtained using Gradient Estimation attacks with Finite Differences and describe the parameter choices made. The y-axis for both figures gives the variation in adversarial success as ε is increased. The most successful black-box attack strategy in both cases is the Gradient Estimation attack using Finite Differences with the logit loss (FD-logit), which coincides almost exactly with the white-box FGS attack with the logit loss (WB FGS-logit). Also, the Gradient Estimation attack with query reduction using PCA (GE-QR (PCA-k, logit)) performs well for both datasets. FD-logit and IFD-logit match white-box attack adversarial success rates: The Gradient Estimation attack with Finite Differences (FD-logit) is the most successful untargeted single-step black-box attack for MNIST and CIFAR-10 models. It significantly outperforms transferability-based attacks (Table 1) and closely tracks white-box FGS with a logit loss (WB FGS-logit) on MNIST and CIFAR-10 (FIG2). For adversarial samples generated iteratively, the Iterative Gradient Estimation attack with Finite Differences (IFD-logit) achieves a 100% adversarial success rate across all models on both datasets (Table 1). We used 0.3 for the value of ε for the MNIST dataset and 8 for the CIFAR-10 dataset. The average distortion for both FD-logit and IFD-logit closely matches that of their white-box counterparts, FGS-logit and IFGS-logit, as given in Table 8. FD-T and IFD-T achieve the highest adversarial success rates in the targeted setting: For targeted black-box attacks, IFD-xent-T achieves 100% adversarial success rates on almost all models, as shown by the results in Table 6. While FD-xent-T only achieves about 30% adversarial success rates, this matches the performance of single-step white-box attacks such as FGS-xent-T and FGS-logit-T (TAB11). The average distortion for samples generated using gradient estimation methods is similar to that of white-box attacks. Parameter choices: We use δ = 1.0 for FD-xent and IFD-xent for both datasets, while using δ = 0.01 for FD-logit and IFD-logit. We find that a larger value of δ is needed for xent loss based attacks to work. The reason for this is that the probability values used in the xent loss are not as sensitive to changes as in the logit loss, and thus the gradient cannot be estimated, since the function value does not change at all when a single pixel is perturbed. For the Iterative Gradient Estimation attacks using Finite Differences, we use α = 0.01 and t = 40 for MNIST and α = 1.0 and t = 10 for CIFAR-10 throughout. The same parameters are used for the white-box Iterative FGS attack given in Appendix I.1. This translates to 62720 queries for MNIST (40 steps of iteration) and 61440 queries (10 steps of iteration) for CIFAR-10 per sample. We find these choices work well, and they keep the running time of the Gradient Estimation attacks at a manageable level. However, we find that we can achieve similar adversarial success rates with far fewer queries using the query reduction methods which we describe in the next section. The major drawback of the approximation based black-box attacks is that the number of queries needed per adversarial sample is large. For an input with dimension d, the number of queries will be exactly 2d for a two-sided approximation. This may be too large when the input is high-dimensional. So we examine two techniques in order to reduce the number of queries the adversary has to make.
Both techniques involve estimating the gradient for groups of features, instead of estimating it one feature at a time. The justification for the use of feature grouping comes from the relation between gradients and directional derivatives for differentiable functions. The directional derivative of a function g along a direction v is defined as ∇_v g(x) = lim_{h→0} (g(x + hv) − g(x))/h. It is a generalization of a partial derivative. For differentiable functions, ∇_v g(x) = ∇_x g(x) · v, which implies that the directional derivative is just the projection of the gradient along the direction v. Thus, estimating the gradient by grouping features is equivalent to estimating an approximation of the gradient constructed by projecting it along appropriately chosen directions. The estimated gradient of any function g can be computed using the techniques below, and then plugged into Equations 3 and 5 instead of the finite difference term to create an adversarial sample. Next, we introduce the techniques applied to group the features for estimation. Detailed algorithms for these techniques are given in Appendix F. The simplest way to group features is to choose, without replacement, a random set of features. The gradient can then be simultaneously estimated for all these features. If the size of the set chosen is k, then the number of queries the adversary has to make is ⌈d/k⌉. When k = 1, this reduces to the case where the partial derivative with respect to every feature is found, as in Section 3.1. In each iteration of Algorithm 1, there is a set of indices S according to which v is determined, with v_i = 1 if and only if i ∈ S. Thus, the directional derivative being estimated is Σ_{i∈S} ∂g(x)/∂x_i, which, once normalized by |S|, is an average of partial derivatives. Thus, the quantity being estimated is not the gradient itself, but an index-wise averaged version of it. A more principled way to reduce the number of queries the adversary has to make to estimate the gradient is to compute directional derivatives along the principal components as determined by principal component analysis (PCA) (BID15), which requires the adversary to have access to a set of data which is representative of the training data. A more detailed description of PCA and the Gradient Estimation attack using PCA components for query reduction is given in Appendix F.2. In Algorithm 2, U is the d × d matrix whose columns are the principal components u_i, for i ∈ {1, ..., d}. The quantity being estimated in Algorithm 2 in the Appendix is an approximation of the gradient in the PCA basis: ∇_x g(x) ≈ Σ_{i=1}^{k} (∇_x g(x)^T u_i) u_i, where the sum on the right approximates the true gradient by the sum of its projections along the top k principal components. In Algorithm 2, the weights of the representation in the PCA basis are approximated using the approximate directional derivatives along the principal components. Performing an iterative attack with the gradient estimated using the finite difference method (Equation 1) could be expensive for an adversary, needing 2td queries to the target model for t iterations with the two-sided finite difference estimation of the gradient. To lower the number of queries needed, the adversary can use either of the query reduction techniques described above to reduce the number of queries to 2tk (k < d). These attacks using the cross-entropy loss are denoted as IGE-QR (RG-k, xent) for the random grouping technique and IGE-QR (PCA-k, xent) for the PCA-based technique. In this section, we summarize the results obtained using Gradient Estimation attacks with query reduction.
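A sketch of the random grouping estimator described above; each group of k features shares one averaged derivative, cutting the query count from 2d to roughly 2⌈d/k⌉ per gradient estimate:

    import numpy as np

    def fd_gradient_random_grouping(g, x, k, delta=0.01, rng=np.random):
        # Estimate an index-wise averaged gradient over random feature groups
        # of size k (the last group may be smaller).
        d = x.size
        idx = rng.permutation(d)
        grad = np.zeros_like(x)
        for start in range(0, d, k):
            group = idx[start:start + k]
            v = np.zeros_like(x)
            v.flat[group] = 1.0
            # Directional derivative along v, normalized by the group size,
            # gives the average partial derivative over the group.
            est = (g(x + delta * v) - g(x - delta * v)) / (2 * delta * len(group))
            grad.flat[group] = est      # same averaged value for the whole group
        return grad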
Gradient estimation with query reduction maintains high attack success rates: For both datasets, the Gradient Estimation attack with PCA based query reduction (GE-QR (PCA-k, logit)) is effective, with performance close to that of FD-logit, with k = 100 for MNIST (FIG2) and k = 400 for CIFAR-10 (FIG2). The Iterative Gradient Estimation attacks with both Random Grouping and PCA based query reduction (IGE-QR (RG-k, logit) and IGE-QR (PCA-k, logit)) achieve close to 100% success rates for untargeted attacks and above 80% for targeted attacks on Model A on MNIST and Resnet-32 on CIFAR-10 (FIG3). Table 2: Comparison of untargeted query-based black-box attack methods. All results are for attacks using the first 1000 samples from the MNIST dataset on Model A and with an L∞ constraint of 0.3. The logit loss is used for all methods except PSO, which uses the class probabilities. FIG3 clearly shows the effectiveness of the gradient estimation attack across models, datasets, and adversarial goals. While random grouping is not as effective as the PCA based method for single-step attacks, it is as effective for iterative attacks. Thus, powerful black-box attacks can be carried out purely using query access. We experimented with Particle Swarm Optimization (PSO), a commonly used evolutionary optimization strategy, to construct adversarial samples, as was done by BID14, but found it to be prohibitively slow for a large dataset, and it was unable to achieve high adversarial success rates even on the MNIST dataset. We also tried to use the Simultaneous Perturbation Stochastic Approximation (SPSA) method, which is similar to the method of Finite Differences, but estimates the gradient of the loss along a random direction r at each step, instead of along the canonical basis vectors. While each step of SPSA only requires 2 queries to the target model, a large number of steps are nevertheless required to generate adversarial samples. A single step of SPSA does not reliably produce adversarial samples. The two main disadvantages of this method are that i) the convergence of SPSA is much more sensitive in practice to the choice of both δ (gradient estimation step size) and α (loss minimization step size), and ii) even with the same number of queries as the Gradient Estimation attacks, the attack success rate is lower, even though the distortion is higher. A comparative evaluation of all the query-based black-box attacks we experimented with for the MNIST dataset is given in Table 2. The PSO based attack uses class probabilities to define the loss function, as it was found to work better than the logit loss in our experiments. The attack that achieves the best trade-off between speed and attack success is IGE-QR (RG-k, logit). Detailed evaluation results are contained in Appendix I. In particular, discussions of the results on baseline attacks (Appendix I.2), the effect of dimension on query-reduced Gradient Estimation attacks (Appendix I.4), single-step attacks on defenses (Appendix I.5), and the efficiency of Gradient Estimation attacks (Appendix I.6) are provided.
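For comparison, the SPSA estimator mentioned above uses only two queries per step, along a random ±1 direction r; since each entry r_i ∈ {−1, +1}, dividing by r_i equals multiplying by r_i, so the per-coordinate estimate is the measured slope spread along r:

    import numpy as np

    def spsa_gradient(g, x, delta=0.01, rng=np.random):
        # SPSA: two queries estimate the derivative along one random +/-1
        # direction r; the gradient estimate is that slope spread along r.
        r = rng.choice([-1.0, 1.0], size=x.shape)
        slope = (g(x + delta * r) - g(x - delta * r)) / (2 * delta)
        return slope * r    # many such steps are needed in practice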
Detailed evaluation results are contained in Appendix I. In particular, discussions of the results on baseline attacks (Appendix I.2), the effect of dimension on query-reduced Gradient Estimation attacks (Appendix I.4), single-step attacks on defenses (Appendix I.5), and the efficiency of Gradient Estimation attacks (Appendix I.6) are provided. Sample adversarial examples are shown in Appendix H.

In this section, we evaluate black-box attacks against different defenses based on adversarial training and its variants. Details about the adversarially trained models can be found in Appendix B. We focus on adversarial training based defenses as they aim to directly improve the robustness of DNNs, and are among the most effective defenses demonstrated so far in the literature. We also conduct real-world attacks on models deployed by Clarifai, an MLaaS provider. In the discussion of our results, we focus on the attack success rate obtained by Iterative Gradient Estimation attacks, since they perform much better than any single-step black-box attack. Nevertheless, in Figure 6 and Appendix I.5, we show that with the addition of an initial random perturbation to overcome "gradient masking" (BID21), the Gradient Estimation attack with Finite Differences is the most effective single-step black-box attack on adversarially trained models on MNIST.

We train variants of Model A with the 3 adversarial training strategies described in Appendix B, using adversarial samples based on an L∞ constraint of 0.3. Model A adv-0.3 is trained with FGS samples, while Model A adv-iter-0.3 is trained with iterative FGS samples using t = 40 and α = 0.01. For the model with ensemble training, Model A adv-ens-0.3 is trained with pre-generated FGS samples for Models A, C, and D, as well as FGS samples for the model being trained. The source of the samples is chosen randomly for each minibatch during training. While single-step black-box attacks are less effective at ε values lower than the one used for training, our experiments show that iterative black-box attacks continue to work well even against adversarially trained networks. For example, the Iterative Gradient Estimation attack using Finite Differences with a logit loss (IFD-logit) achieves an adversarial success rate of 96.4% against Model A adv-ens-0.3, while the best transferability attack has a success rate of 4.9%. This is comparable to the white-box attack success rate of 93% from Table 10. However, Model A adv-iter-0.3 is quite robust even against iterative attacks, with the highest black-box attack success rate achieved being 14.5%.

Further, in FIG3, we can see that using just 4000 queries per sample, the Iterative Gradient Estimation attack using PCA for query reduction (IGE-QR (PCA-400, logit)) achieves 100% (untargeted) and 74.5% (targeted) adversarial success rates against Model A adv-0.3. Our methods far outperform the other black-box attacks, as shown in Table 10.

We train variants of Resnet-32 using adversarial samples with an L∞ constraint of 8. Resnet-32 adv-8 is trained with FGS samples with the same constraint, and Resnet-32 adv-ens-8 is trained with pre-generated FGS samples from Resnet-32 and Std.-CNN, as well as FGS samples for the model being trained. Resnet-32 adv-iter-8 is trained with iterative FGS samples using t = 10 and α = 1.0. Iterative black-box attacks perform well against adversarially trained models for CIFAR-10 as well. IFD-logit achieves attack success rates of 100% against both Resnet-32 adv-8 and Resnet-32 adv-ens-8 (Table 3), which reduces slightly to 97% when IFD-QR (PCA-400, logit) is used. This matches the performance of white-box attacks as given in Table 10. IFD-QR (PCA-400, logit) also achieves a 72% success rate for targeted attacks at ε = 8, as shown in FIG3.

The iteratively trained model has poor performance on both benign as well as adversarial samples. Resnet-32 adv-iter-8 has an accuracy of only 79.1% on benign data, as shown in TAB6. The Iterative Gradient Estimation attack using Finite Differences with the cross-entropy loss (IFD-xent) achieves an untargeted attack success rate of 55% on this model, which is lower than on the other adversarially trained models, but still significant.
This is in line with the observation by BID9.

Table 3: Untargeted black-box attacks for models with adversarial training: adversarial success rates and average distortion ∆(X, X_adv) over the samples. Above: MNIST, ε = 0.3. Below: CIFAR-10, ε = 8.

Summary. Both single-step and iterative variants of the Gradient Estimation attacks outperform other black-box attacks on both the MNIST and CIFAR-10 datasets, achieving attack success rates close to those of white-box attacks even on adversarially trained models, as can be seen in Table 3 and FIG3.

Since the only requirement for carrying out the Gradient Estimation based attacks is query-based access to the target model, a number of deployed public systems that provide classification as a service can be used to evaluate our methods. We choose Clarifai, as it has a number of models trained to classify image datasets for a variety of practical applications, and it provides black-box access to its models and returns confidence scores upon querying. In particular, Clarifai has models used for the detection of Not Safe For Work (NSFW) content, as well as for Content Moderation. These are important applications where the presence of adversarial samples presents a real danger: an attacker, using query access to the model, could generate an adversarial sample which will no longer be classified as inappropriate. For example, an adversary could upload violent images, adversarially modified, such that they are marked incorrectly as 'safe' by the Content Moderation model.

We evaluate our attack using the Gradient Estimation method on the Clarifai NSFW and Content Moderation models. When we query the API with an image, it returns the confidence scores associated with each category, with the confidence scores summing to 1. We use the random grouping technique in order to reduce the number of queries, and take the logarithm of the confidence scores in order to use the logit loss. A large number of successful attack images can be found at https://www.dropbox.com/s/xsu31tjr0yq7rj7/clarifai-examples.zip?dl=0. Due to their possibly offensive nature, they are not included in the paper.

An example of an attack on the Content Moderation API is given in FIG0, where the original image on the left is clearly of some kind of drug on a table, with a spoon and a syringe. It is classified as a drug by the Content Moderation model with a confidence score of 0.99. The image on the right is an adversarial image generated with 192 queries to the Content Moderation API, with an L∞ constraint on the perturbation of ε = 32. While the image can still clearly be classified by a human as being of drugs on a table, the Content Moderation model now classifies it as 'safe' with a confidence score of 0.96.

Remarks. The proposed Gradient Estimation attacks can successfully generate adversarial examples that are misclassified by a real-world system hosted by Clarifai without prior knowledge of the training set or model. Overall, in this paper, we conduct a systematic analysis of new and existing black-box attacks on state-of-the-art classifiers and defenses. We propose Gradient Estimation attacks which achieve high attack success rates comparable with even white-box attacks and outperform other state-of-the-art black-box attacks. We apply random grouping and PCA based methods to reduce the number of queries required to a small constant and demonstrate the effectiveness of the Gradient Estimation attack even in this setting.
We also apply our black-box attack against a real-world classifier and state-of-the-art defenses. All of our results show that Gradient Estimation attacks are extremely effective in a variety of settings, making the development of better defenses against black-box attacks an urgent task.

In this section, we describe existing methods for generating adversarial examples. An adversary can generate an adversarial example x_adv from a benign sample x by adding an appropriate perturbation of small magnitude (BID18). Such an adversarial example x_adv will either cause the classifier to misclassify it into a targeted class (targeted attack), or any class other than the ground truth class (untargeted attack). Now, we describe two baseline black-box attacks which can be carried out without any knowledge of or query access to the target model.

Random perturbations. With no knowledge of f or the training set, the simplest manner in which an adversary may seek to carry out an attack is by adding a random perturbation to the input (BID18; BID0; BID6). These perturbations can be generated by any distribution of the adversary's choice and constrained according to an appropriate norm. If we let P be a distribution over X, and p is a random variable drawn according to P, then a noisy sample is just x_noise = x + p. Since random noise is added, it is not possible to generate targeted adversarial samples in a principled manner. This attack is denoted as Rand. throughout.

A perturbation aligned with the difference of means of two classes is likely to be effective for an adversary hoping to cause misclassification for a broad range of classifiers (BID22). While these perturbations are far from optimal for DNNs, they provide a useful baseline to compare against. Adversaries with at least partial access to the training or test sets can carry out this attack. An adversarial sample generated using this method, and with L∞ constraints, is x_adv = x + ε · sign(µ_t − µ_o), where µ_t is the mean of the target class and µ_o is the mean of the original ground truth class. For an untargeted attack, t = argmin_i d(µ_i, µ_o), where d(·, ·) is an appropriately chosen distance function. In other words, the class whose mean is closest to the original class in terms of the Euclidean distance is chosen to be the target. This attack is denoted as D. of M. throughout.
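A minimal sketch of the D. of M. baseline, with the class means estimated from whatever training data the adversary has partial access to (the Euclidean distance serves as d, as above):

```python
import numpy as np

def diff_of_means_attack(X_train, y_train, x, y_orig, eps):
    """Untargeted difference-of-means baseline under an L-infinity constraint:
    perturb toward the mean of the class whose mean is closest to the
    original class mean."""
    classes = np.unique(y_train)
    means = {c: X_train[y_train == c].mean(axis=0) for c in classes}
    mu_o = means[y_orig]
    target = min((c for c in classes if c != y_orig),
                 key=lambda c: np.linalg.norm(means[c] - mu_o))
    return x + eps * np.sign(means[target] - mu_o)
```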
Now, we describe two white-box attack methods, used in transferability-based attacks, for which we constructed approximate, gradient-free versions in Section 3. These attacks are based on either iterative or single-step gradient based minimization of appropriately defined loss functions of neural networks. Since these methods all require knowledge of the model's gradient, we assume the adversary has access to a local model f_s. Adversarial samples generated for f_s can then be transferred to the target model f_t to carry out a transferability-based attack (BID12; BID7). An ensemble of local models (BID5) may also be used. Transferability-based attacks are described in Appendix A.2.

The single-step Fast Gradient method, first introduced by BID0, utilizes a first-order approximation of the loss function in order to construct adversarial samples for the adversary's surrogate local model f_s. The samples are constructed by performing a single step of gradient ascent for untargeted attacks. Formally, the adversary generates samples x_adv with L∞ constraints (known as the Fast Gradient Sign (FGS) method) in the untargeted attack setting as

x_adv = x + ε · sign(∇_x ℓ_{f_s}(x, y)),

where ℓ_{f_s}(x, y) is the loss function with respect to which the gradient is taken. The loss function typically used is the cross-entropy loss (BID12).

Iterative Fast Gradient methods are simply multi-step variants of the Fast Gradient method described above, where a scaled signed gradient of the loss is added to the sample over t + 1 iterations, starting from the benign sample, and the updated sample is projected to satisfy the constraint set H in every step:

x_adv^{t+1} = Π_H(x_adv^t + α · sign(∇_x ℓ_{f_s}(x_adv^t, y))),

with x_adv^0 = x. Iterative fast gradient methods thus essentially carry out projected gradient descent (PGD) with the goal of maximizing the loss, as pointed out by BID9.
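A minimal sketch of both update rules, assuming grad_loss(x, y) is an oracle returning the gradient of the surrogate model's loss with respect to the input (the step counts and step size mirror the t = 40, α = 0.01 MNIST settings used later):

```python
import numpy as np

def fgs(x, y, grad_loss, eps):
    """Single-step Fast Gradient Sign: one step of gradient ascent on the loss."""
    return x + eps * np.sign(grad_loss(x, y))

def iterative_fgs(x, y, grad_loss, eps, alpha=0.01, steps=40, lo=0.0, hi=1.0):
    """Iterative FGS / PGD: repeated signed-gradient steps, each projected
    onto the L-infinity ball of radius eps around x (the constraint set H)
    and onto the valid input range."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_loss(x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)
        x_adv = np.clip(x_adv, lo, hi)
    return x_adv
```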
Here we describe black-box attacks that assume the adversary has access to a representative set of training data in order to train a local model. One of the earliest observations with regards to adversarial samples for neural networks was that they transfer; i.e., adversarial attack samples generated for one network are also adversarial for another network. This observation directly led to the proposal of a black-box attack where an adversary would generate samples for a local network and transfer these to the target model, which is referred to as a Transferability based attack.

Transferability attack (single local model). These attacks use a surrogate local model f_s to craft adversarial samples, which are then submitted to f in order to cause misclassification. Most existing black-box attacks are based on transferability from a single local model (BID12; BID7). The different attack strategies to generate adversarial instances introduced in Section A.1 can be used here to generate adversarial instances against f_s, so as to attack f.

Transferability attack (local model ensemble). Since it is not clear which single local model f_s is best suited for generating adversarial samples that transfer well to the target model f, BID5 propose the generation of adversarial examples for an ensemble of local models. This method modifies each of the existing transferability attacks by substituting a sum over the loss functions in place of the loss from a single local model. Concretely, let the ensemble of m local models to be used to generate the local loss be {f_{s_1}, …, f_{s_m}}. The ensemble loss is then computed as ℓ_ens(x, y) = Σ_{i=1}^{m} α_i ℓ_{f_{s_i}}(x, y), where α_i is the weight given to each model in the ensemble. The FGS attack in the ensemble setting then becomes x_adv = x + ε · sign(∇_x ℓ_ens(x, y)). The Iterative FGS attack is modified similarly. BID5 show that the Transferability attack (local model ensemble) performs well even in the targeted attack case, while the Transferability attack (single local model) is usually only effective for untargeted attacks. The intuition is that while one model's gradient may not be adversarial for a target model, it is likely that at least one of the gradient directions from the ensemble represents a direction that is somewhat adversarial for the target model.

BID18 and BID0 introduced the concept of adversarial training, where the standard loss function for a neural network f is modified as follows:

ℓ̃(x, y) = α ℓ(x, y) + (1 − α) ℓ(x_adv, y),   (Eq. 9)

where y is the true label of the sample x, x_adv is an adversarial sample generated from x, and α weights the benign and adversarial components of the loss. The underlying objective of this modification is to make the neural network more robust by penalizing it during training to account for adversarial samples. During training, the adversarial samples are computed with respect to the current state of the network using an appropriate method such as FGSM.

Ensemble adversarial training. BID21 proposed an extension of the adversarial training paradigm which is called ensemble adversarial training. As the name suggests, in ensemble adversarial training, the network is trained with adversarial samples from multiple networks.

Iterative adversarial training. A further modification of the adversarial training paradigm proposes training with adversarial samples generated using iterative methods, such as the iterative FGSM attack described earlier (BID9).
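A minimal sketch of one training step under the modified loss of Eq. 9, matching the equal weighting (α = 0.5) used for the adversarially trained models below; PyTorch is our own framework choice here, not one stated in the paper.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, eps=0.3, alpha=0.5):
    """One step of FGSM adversarial training with the alpha-weighted loss of Eq. 9."""
    # Craft FGS samples against the current state of the network.
    x_req = x.clone().detach().requires_grad_(True)
    benign_loss = F.cross_entropy(model(x_req), y)
    grad_x, = torch.autograd.grad(benign_loss, x_req)
    x_adv = (x + eps * grad_x.sign()).clamp(0.0, 1.0).detach()

    # Mixed objective on benign and adversarial samples.
    loss = alpha * F.cross_entropy(model(x), y) \
         + (1.0 - alpha) * F.cross_entropy(model(x_adv), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```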
MNIST. This is a dataset of images of handwritten digits. There are 60,000 training examples and 10,000 test examples. Each image belongs to a single class from 0 to 9. The images have a dimension d of 28 × 28 pixels (total of 784) and are grayscale. Each pixel value lies in [0, 1]. The digits are size-normalized and centered. This dataset is used commonly as a 'sanity-check' or first-level benchmark for state-of-the-art classifiers. We use this dataset since it has been extensively studied from the attack perspective by previous work.

CIFAR-10. This is a dataset of color images from 10 classes.

In this section, we present the architectures and training details for both the normally and adversarially trained variants of the models on both the MNIST and CIFAR-10 datasets. The accuracy of each model on benign data is given in TAB6. Models A and C have both convolutional layers as well as fully connected layers. They also have the same order of magnitude of parameters. Model B, on the other hand, does not have fully connected layers and has an order of magnitude fewer parameters. Similarly, Model D has no convolutional layers and has fewer parameters than all the other models. Models A, B, and C all achieve greater than 99% classification accuracy on the test data. Model D achieves 97.2% classification accuracy, due to the lack of convolutional layers.

For all adversarially trained models, each training batch contains 128 samples, of which 64 are benign and 64 are adversarial samples (either FGSM or iterative FGSM). This implies that the loss for each set of samples is weighted equally during training; i.e., in Eq. 9, α is set to 0.5. For ensemble adversarial training, the source of the FGSM samples is chosen randomly for each training batch. Networks using standard and ensemble adversarial training are trained for 12 epochs, while those using iterative adversarial training are trained for 64 epochs.

In particular, Resnet-32 is a standard 32-layer ResNet with no width expansion, and Resnet-28-10 is a wide ResNet with 28 layers with the width set to 10, based on the best performing ResNet from Zagoruyko & Komodakis (TensorFlow Authors, a). The width indicates the multiplicative factor by which the number of filters in each residual layer is increased. Std.-CNN is a CNN with two convolutional layers, each followed by a max-pooling and normalization layer, and two fully connected layers, each of which has weight decay. For each model architecture, we train 3 models: one on only the CIFAR-10 training data, one using standard adversarial training, and one using ensemble adversarial training. Resnet-32 is trained for 125,000 steps, Resnet-28-10 is trained for 167,000 steps, and Std.-CNN is trained for 100,000 steps on the benign training data. Models Resnet-32 and Resnet-28-10 are much more accurate than Std.-CNN. The adversarial variants of Resnet-32 are trained for 80,000 steps. All models were trained with a batch size of 128. The two ResNets achieve close to state-of-the-art accuracy on the CIFAR-10 test set, with Resnet-32 at 92.4% and Resnet-28-10 at 94.4%. Std.-CNN, on the other hand, only achieves an accuracy of 81.4%, reflecting its simple architecture and the complexity of the task.

TAB6: Accuracy of models on the benign test data.

C.3 ALTERNATIVE ADVERSARIAL SUCCESS METRIC

Note that the adversarial success rate can also be computed by considering only the fraction of inputs that meet the adversary's objective given that the original sample was correctly classified. That is, one would count the fraction of correctly classified inputs (i.e., f(x) = y) for which f(x_adv) ≠ y in the untargeted case, and f(x_adv) = T in the targeted case. In a sense, this fraction represents those samples which are truly adversarial, since they are misclassified solely due to the adversarial perturbation added and not due to the classifier's failure to generalize well. In practice, both these methods of measuring the adversarial success rate lead to similar results for classifiers with high accuracy on the test data.

Here, we provide a unified framework assuming an adversary can make active queries to the model. Existing attacks making zero queries are a special case in this framework. Given an input instance x, the adversary makes a sequence of queries based on the adversarial constraint set H, and iteratively adds perturbations until the desired query results are obtained, using which the corresponding adversarial example x_adv is generated. We formally define the targeted and untargeted black-box attacks based on the framework as below.

Definition 1 (Untargeted black-box attack). Given an input instance x and an iterative active query attack strategy A, a query sequence can be generated as x_2 = A(x_1, q_f^1), …, x_k = A(x_{k−1}, q_f^{k−1}), where q_f^i denotes the ith corresponding query result on x_i, and we set x_1 = x. A black-box attack on f(·; θ) is untargeted if the adversarial example x_adv = x_k satisfies f(x_adv; θ) ≠ f(x; θ), where k is the number of queries made.

Definition 2 (Targeted black-box attack). Given an input instance x and an iterative active query attack strategy A, a query sequence can be generated as x_2 = A(x_1, q_f^1), …, x_k = A(x_{k−1}, q_f^{k−1}), where q_f^i denotes the ith corresponding query result on x_i, and we set x_1 = x. A black-box attack on f(·; θ) is targeted if the adversarial example x_adv = x_k satisfies f(x_adv; θ) = T, where T and k are the target class and the number of queries made, respectively.

The case where the adversary makes no queries to the target classifier is a special case we refer to as a zero-query attack. In the literature, a number of these zero-query attacks have been carried out with varying degrees of success (BID12; BID5; BID7; BID8).

The expressions for targeted white-box and Gradient Estimation attacks are given in this section. Targeted transferability attacks are carried out using locally generated targeted white-box adversarial samples.

Table 6: Targeted black-box attacks: adversarial success rates. The number in parentheses for each entry is ∆(X, X_adv), the average distortion over all samples used in the attack. Above: MNIST, ε = 0.3. Below: CIFAR-10, ε = 8.

Adversarial samples generated using the targeted FGS attack are

x_adv = x − ε · sign(∇_x ℓ_{f_s}(x, T)),

where T is the target class. Similarly, the adversarial samples generated using iterative FGS are

x_adv^{t+1} = Π_H(x_adv^t − α · sign(∇_x ℓ_{f_s}(x_adv^t, T))).

For the logit based loss, targeted adversarial samples are generated using the following loss term:

ℓ_logit(x, T) = max{φ(x)_i : i ≠ T} − φ(x)_T,

where φ(x) denotes the vector of logits; descending this loss drives the logit of the target class above all others. Targeted black-box adversarial samples generated using the Gradient Estimation method are then

x_adv = x − ε · sign(FD_x(ℓ_f(x, T), δ)),

where FD_x(·, δ) denotes the two-sided finite difference estimate of the gradient with step size δ. Similarly, in the case of a black-box adversary with query-access to the logits, the adversarial sample is

x_adv = x − ε · sign(FD_x(ℓ_logit(x, T), δ)).
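A minimal sketch combining the targeted expressions above, assuming a hypothetical oracle logits_fn that returns the logit vector φ(x) of the target model as a numpy array (full per-feature finite differences, no query reduction):

```python
import numpy as np

def targeted_ge_logit_attack(logits_fn, x, target, eps, delta=1.0):
    """Single-step targeted Gradient Estimation attack with the logit loss."""
    def logit_loss(z):
        phi = logits_fn(z)
        return np.max(np.delete(phi, target)) - phi[target]

    d = x.size
    grad_est = np.zeros(d)
    for i in range(d):                      # 2d queries in total
        e = np.zeros(d)
        e[i] = 1.0
        grad_est[i] = (logit_loss(x + delta * e)
                       - logit_loss(x - delta * e)) / (2.0 * delta)
    return x - eps * np.sign(grad_est)      # descend the targeted loss
```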
F GRADIENT ESTIMATION WITH QUERY REDUCTION

This section contains the detailed algorithms for query reduction using random grouping and PCA.

Algorithm 1: Gradient estimation with query reduction using random features.
Input: x, group size k, step size δ, and query access to the loss g.
1: Initialize the estimate ∇̂g ← 0 ∈ R^d.
2: for i = 1 to ⌈d/k⌉ do
3: Choose a set of k random indices S_i out of [1, …, d], without replacement across iterations.
4: Set v such that v_j = 1 if j ∈ S_i and v_j = 0 otherwise.
5: For all j ∈ S_i, set ∇̂g_j ← (g(x + δv) − g(x − δv)) / (2δ), which is the two-sided approximation of the directional derivative along v.
6: end for
Output: ∇̂g.

Concretely, let the samples the adversary wants to misclassify be column vectors x_i ∈ R^d for i ∈ {1, …, n} and let X be the d × n matrix of centered data samples (i.e., X = [x̃_1 x̃_2 … x̃_n], where x̃_i = x_i − (1/n) Σ_{j=1}^{n} x_j). The principal components of X are the normalized eigenvectors of its sample covariance matrix C = (1/n) X X^T. Since C is a positive semidefinite matrix, there is a decomposition C = UΛU^T, where U is an orthogonal matrix, Λ = diag(λ_1, …, λ_d), and λ_1 ≥ … ≥ λ_d ≥ 0. Thus, U in Algorithm 2 is the d × d matrix whose columns are unit eigenvectors of C. The eigenvalue λ_i is the variance of X along the ith component. Further, PCA minimizes reconstruction error in terms of the L2 norm; i.e., it provides a basis in which the Euclidean distance to the original sample from a sample reconstructed using a subset of the basis vectors is the smallest.

Algorithm 2: Gradient estimation with query reduction using PCA components.
Input: x, number of components k, step size δ, the matrix U of principal components, and query access to the loss g.
1: for i = 1 to k do
2: Initialize v such that v = u_i/‖u_i‖, where u_i is the ith column of U.
3: Compute α_i = (g(x + δv) − g(x − δv)) / (2δ), which is the two-sided approximation of the directional derivative along v.
4: end for
Output: ∇̂g = Σ_{i=1}^{k} α_i u_i/‖u_i‖.
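A minimal numpy sketch of Algorithm 2, together with one way to obtain U offline from representative data via the eigendecomposition of the sample covariance, as above; g is again a hypothetical scalar loss oracle for the target model.

```python
import numpy as np

def pca_components(X_rep):
    """X_rep: n x d matrix of representative data. Returns the d x d matrix U
    whose columns are principal components, ordered by decreasing variance."""
    Xc = X_rep - X_rep.mean(axis=0)
    C = Xc.T @ Xc / Xc.shape[0]          # sample covariance (d x d)
    eigvals, U = np.linalg.eigh(C)       # eigh returns ascending eigenvalues
    return U[:, ::-1]                    # reorder columns to descending variance

def ge_pca(g, x, U, k, delta=1.0):
    """Algorithm 2: two-sided directional derivatives along the top-k
    principal components, recombined as a sum of projections (2k queries)."""
    grad_est = np.zeros(x.size)
    for i in range(k):
        v = U[:, i] / np.linalg.norm(U[:, i])
        alpha_i = (g(x + delta * v) - g(x - delta * v)) / (2.0 * delta)
        grad_est += alpha_i * v
    return grad_est
```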
Taxonomy of black-box attacks: To deepen our understanding of the effectiveness of black-box attacks, in this work, we propose a taxonomy of black-box attacks, intuitively based on the number of queries on the target model used in the attack. The details, along with a summary of the attacks we evaluate, are provided in TAB8.

In FIG9, we show some examples of successful untargeted adversarial samples against Model A on MNIST and Resnet-32 on CIFAR-10. These images were generated with an L∞ constraint of ε = 0.3 for MNIST and ε = 8 for CIFAR-10, and all attacks shown use the logit loss. Clearly, the amount of perturbation added by iterative attacks is much smaller, barely being visible in the images; perturbations in the images generated using single-step attacks are far larger than those for iterative attacks. The '7' from MNIST is classified as a '3' by all single-step attacks and as a '9' by all iterative attacks. The dog from CIFAR-10 is classified as a bird by the white-box FGS and Finite Difference attacks, and as a frog by the Gradient Estimation attack with query reduction.

In this section, we present the white-box attack results for various cases in Tables 8-10. Where relevant, our results match previous work (BID0).

In the baseline attacks described in Appendix A.1.1, the choice of distribution for the random perturbation attack and the choice of distance function for the difference of means attack are not fixed. Here, we describe the choices we make for both attacks. The random perturbation p for each sample (for both MNIST and CIFAR-10) is chosen independently according to a multivariate normal distribution with mean 0, i.e., p ∼ N(0, I_d). Then, depending on the norm constraint, either a signed and scaled version of the random perturbation (L∞) or a scaled unit vector in the direction of the perturbation (L2) is added. For an untargeted attack utilizing perturbations aligned with the difference of means, for each sample, the mean of the class closest to the original class in the L2 distance is determined.

Table 8: Untargeted white-box attacks: adversarial success rates and average distortion ∆(X, X_adv) over the test set. Above: MNIST, ε = 0.3. Below: CIFAR-10, ε = 8.

As expected, adversarial samples generated using Rand. do not achieve high adversarial success rates in spite of having similar or larger average distortion than the other black-box attacks for both the MNIST and CIFAR-10 models. However, the D. of M. method is quite effective at higher perturbation values for the MNIST dataset, as can be seen in FIG2. Also, for Models B and D, the D. of M. attack is more effective than FD-xent. The D. of M. method is less effective in the targeted attack case, but for Model D, it outperforms the transferability based attack considerably. Its success rate is comparable to the targeted transferability based attack for Model A as well. The relative effectiveness of the two baseline methods is reversed for the CIFAR-10 dataset, however, where Rand. outperforms D. of M. considerably as ε is increased. This indicates that the models trained on MNIST have normal vectors to decision boundaries which are more aligned with the vectors along the difference of means as compared to the models on CIFAR-10.

For the transferability experiments, we choose to transfer from Model B for the MNIST dataset and from Resnet-28-10 for the CIFAR-10 dataset, as these models are each similar to at least one of the other models for their respective dataset and different from one of the others. They are also fairly representative instances of DNNs used in practice.

Table 10: Untargeted white-box attacks for models with adversarial training: adversarial success rates and average distortion ∆(X, X_adv) over the test set. Above: MNIST, ε = 0.3. Below: CIFAR-10, ε = 8.

Adversarial samples generated using single-step methods and transferred from Model B to the other models have higher success rates for untargeted attacks when they are generated using the logit loss as compared to the cross-entropy loss, as can be seen in Table 1. For iterative adversarial samples, however, the untargeted attack success rates are roughly the same for both loss functions. As has been observed before, the adversarial success rate for targeted attacks with transferability is much lower than in the untargeted case, even when iteratively generated samples are used. For example, the highest targeted transferability rate in Table 6 is 54.5%, compared to 100.0% achieved by IFD-xent-T across models. One attempt to improve the transferability rate is to use an ensemble of local models, instead of a single one. The results for this on the MNIST data are presented in TAB7. In general, both untargeted and targeted transferability increase when an ensemble is used. However, the increase is not monotonic in the number of models used in the ensemble, and we can see that the transferability rate for IFGS-xent samples falls sharply when Model D is added to the ensemble. This may be due to it having a very different architecture as compared to the other models, and thus also having very different gradient directions. This highlights one of the pitfalls of transferability, where it is important to use a local surrogate model similar to the target model for achieving high attack success rates.

(Figure panel (c): Gradient Estimation attack with query reduction using PCA components and the logit loss (GE-QR (PCA-k, logit)) on Resnet-32 (CIFAR-10); relatively high success rates are maintained even for k = 400.)

We consider the effectiveness of Gradient Estimation with random grouping based query reduction and the logit loss (GE-QR (RG-k, logit)) on Model A on MNIST data in FIG13, where k is the number of indices chosen in each iteration of Algorithm 1.
Thus, as k increases and the number of groups decreases, we expect adversarial success to decrease, as gradients over larger groups of features are averaged. This is the effect we see in FIG13, where the adversarial success rate drops from 93% to 63% at ε = 0.3 as k increases from 1 to 7. Grouping with k = 7 translates to 112 queries per MNIST image, down from 784. Thus, in order to achieve high adversarial success rates with the random grouping method, larger perturbation magnitudes are needed. On the other hand, the PCA-based approach GE-QR (PCA-k, logit) is much more effective, as can be seen in FIG13. Using 100 principal components to estimate the gradient for Model A on MNIST as in Algorithm 2, the adversarial success rate at ε = 0.3 is 88.09%, as compared to 92.9% without any query reduction. Similarly, using 400 principal components for Resnet-32 on CIFAR-10 (FIG13), an adversarial success rate of 66.9% can be achieved at ε = 8. At ε = 16, the adversarial success rate rises to 80.1%.

In this section, we analyse the effectiveness of single-step black-box attacks on adversarially trained models and show that the Gradient Estimation attacks using Finite Differences with the addition of random perturbations outperform other black-box attacks.

Evaluation of single-step attacks on a model with basic adversarial training: In Figure 6a, we can see that both single-step black-box and white-box attacks have much lower adversarial success rates on Model A adv-0.3 as compared to Model A. The success rate of the Gradient Estimation attacks matches that of white-box attacks on these adversarially trained networks as well. To overcome this, we add an initial random perturbation to samples before using the Gradient Estimation attack with Finite Differences and the logit loss (FD-logit). These are then the most effective single-step black-box attacks on Model A adv-0.3 at ε = 0.3, with an adversarial success rate of 32.2%, surpassing the Transferability attack (single local model) from Model B.

In Figure 6b, we again see that the Gradient Estimation attacks using Finite Differences (FD-xent and FD-logit) closely track the white-box FGS attacks (FGS-xent and FGS-logit) against the adversarially trained Resnet-32. As ε is increased, the attacks that perform the best are Random Perturbations (Rand.), Difference-of-means (D. of M.), and the Transferability attack (single local model) from Resnet-28-10, with the latter performing slightly better than the baseline attacks. This is due to the 'gradient masking' phenomenon and can be overcome by adding random perturbations, as for MNIST. An interesting effect is observed at ε = 4, where the adversarial success rate is higher than at ε = 8. The likely explanation for this effect is that the model has overfitted to adversarial samples at ε = 8. Our Gradient Estimation attack closely tracks the adversarial success rate of white-box attacks in this setting as well.

Increasing effectiveness of single-step attacks using an initial random perturbation: Since the Gradient Estimation attacks with Finite Differences (FD-xent and FD-logit) were not performing well due to the masking of gradients at the benign sample x, we added an initial random perturbation to escape this low-gradient region, as in the RAND-FGSM attack (BID21). Figure 7 shows the effect of adding an initial L∞-constrained perturbation of magnitude 0.05. With the addition of a random perturbation, FD-logit has a much improved adversarial success rate on Model A adv-0.3, going up to 32.2% from 2.8% without the perturbation, at a total perturbation value of 0.3. It even outperforms the white-box FGS (FGS-logit) with the same random perturbation added. This effect is also observed for Model A adv-ens-0.3, but Model A adv-iter-0.3 appears to be resistant to single-step gradient based attacks. Thus, our attacks work well for single-step attacks on DNNs with standard and ensemble adversarial training, and achieve performance levels close to that of white-box attacks.
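A minimal sketch of this single-step variant with an initial random perturbation, reusing any estimated-gradient routine (e.g., the finite-difference or query-reduced estimators sketched earlier) as grad_est; the budget split mirrors the 0.05 initial magnitude and 0.3 total budget above.

```python
import numpy as np

def rand_init_single_step(grad_est, x, y, eps, eps_init=0.05, lo=0.0, hi=1.0):
    """Escape the low-gradient region around the benign sample with a small
    random L-infinity perturbation, then spend the remaining budget on one
    estimated-gradient step (RAND-FGSM style)."""
    noise = np.random.choice([-1.0, 1.0], size=x.shape)
    x0 = np.clip(x + eps_init * noise, lo, hi)
    x_adv = x0 + (eps - eps_init) * np.sign(grad_est(x0, y))
    return np.clip(x_adv, lo, hi)
```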
In our evaluations, all models were run on a GPU with a batch size of 100. On Model A on MNIST data, the single-step attacks FD-xent and FD-logit take 6.2 × 10⁻² and 8.8 × 10⁻² seconds per sample, respectively. Thus, these attacks can be carried out on the entire MNIST test set of 10,000 images in about 10 minutes. For iterative attacks with no query reduction, with 40 iterations per sample (α set to 0.01), both IFD-xent and IFD-xent-T take about 2.4 seconds per sample. Similarly, IFD-logit and IFD-logit-T take about 3.5 seconds per sample. With query reduction, using IGE-QR (PCA-k, logit) with k = 100 and IGE-QR (RG-k, logit) with k = 8, the time taken is just 0.5 seconds per sample. In contrast, the fastest attack from prior work, the ZOO-ADAM attack, takes around 80 seconds per sample for MNIST, which is 24× slower than the Iterative Finite Difference attacks and around 160× slower than the Iterative Gradient Estimation attacks with query reduction.

For Resnet-32 on the CIFAR-10 dataset, FD-xent, FD-xent-T, FD-logit and FD-logit-T all take roughly 3 seconds per sample. The iterative variants of these attacks with 10 iterations (α set to 1.0) take roughly 30 seconds per sample. Using query reduction, IGE-QR (PCA-k, logit) with k = 100 and 10 iterations takes just 5 seconds per sample. The time required per sample increases with the complexity of the network, which is observed even for white-box attacks. For the CIFAR-10 dataset, the fastest attack from prior work takes about 206 seconds per sample, which is 7× slower than the Iterative Finite Difference attacks and around 40× slower than the Iterative Gradient Estimation attacks with query reduction.

All the above numbers are for the case when queries are not made in parallel. Our attack algorithm allows for queries to be made in parallel as well. We find that a simple parallelization of the queries gives us a 2-4× speedup. The limiting factor is the fact that the model is loaded on a single GPU, which implies that the current setup is not fully optimized to take advantage of the inherently parallel nature of our attack. With further optimization, greater speedups can be achieved.

Remarks: Overall, our attacks are very efficient and allow an adversary to generate a large number of adversarial samples in a short period of time.
Query-based black-box attacks on deep neural networks with adversarial success rates matching white-box attacks
998
scitldr
We propose a novel subgraph image representation for classification of network fragments with the target being their parent networks. The graph image representation is based on 2D image embeddings of adjacency matrices. We use this image representation in two modes. First, as the input to a machine learning algorithm. Second, as the input to a pure transfer learner. Our results from multiple datasets are that: 1. deep learning using structured image features performs the best compared to graph kernel and classical features based methods; and, 2. pure transfer learning works effectively with minimum interference from the user and is robust against small data.

With the advent of big data, graphical representation of information has gained popularity. Being able to classify graphs has applications in many domains. We ask, "Given a small piece of a parent network, is it possible to identify the nature of the parent network (Figure 1)?" We address this problem using structured image representations of graphs. Adjacency matrices are notoriously bad for machine learning. It is easy to see why, from the unstructured image of a small fragment of a road network, in figure (a) below. Though the road network is structured, the random image would convey little or no information to machine learning algorithms (in the image, a black pixel at position (i, j) corresponds to an edge between nodes i and j). Reordering the vertices (figure (b) below) gives a much more structured image for the same subgraph as in (a). Now, the potential to learn distinguishing properties of the subgraph is evident. We propose to exploit this very observation to solve a basic graph problem (see Figure 1). The datasets mentioned in Figure 1 are discussed in Section 2.4. We stress that both images are lossless representations of the same adjacency matrix.

We use the structured image to classify subgraphs in two modes: (i) deep learning models on the structured image representation as input; (ii) the structured image representation is used as input to a transfer learner (Caffe: see Section 2.3) in a pure transfer learning setting without any change to the Caffe algorithm. Caffe outputs top-k categories that best describe the image. For real world images, these Caffe-descriptions are human friendly, as seen in Figure 2a. However, for network-images, Caffe gives a description which doesn't really have intuitive meaning (Figure 2b).

Figure 2: An image of a dog and a structured image of a Facebook graph sample vs. their corresponding maximally specific classification vectors returned by Caffe.

We map the Caffe-descriptions to vectors. This allows us to compute similarity between network images using the similarity between Caffe description-vectors (see Section 2). The significant difference between our work and previous approaches is that we transform graph classification into image classification. We propose an image representation for the adjacency matrix. We use this representation as input to machine learning algorithms for graph classification, yielding top performance. We further show that this representation is powerful enough to serve as input to a pure transfer learner that has been trained in a completely unrelated image domain.

The Adjacency Matrix Image Representation. Given a sample subgraph from a parent network, the first step is to construct the image representation. We illustrate the workflow below.
(Workflow: sample subgraph → image embedding → image structuring.)

We use a novel method, introduced in prior work, which produces an adjacency matrix that is invariant to permutations of the vertices in the adjacency matrix. The image is simply a "picture" of this permutation-invariant adjacency matrix.

Deep Learning Using the Adjacency Matrix Image Representation. We train deep image classifiers (discussed in Section 2.2) on our image representation as in Figure 3.

Figure 3: Classification of structured image embeddings using deep learning.

We compared performance with several methods, including graph kernel classifiers and classifiers based on standard topological features of the graph. Our image representation performs best.

Transfer Learning Using the Adjacency Matrix Image Representation. When data is scarce or there are many missing labels, a popular option is transfer learning to leverage knowledge from some other domain. Typically the other domain is closely related to the target application. It is unusual for learning in a completely unrelated domain to be transferable to a new target domain. We show that our image representation is powerful enough that one can directly transfer learn from the real world image domain to the network domain (two completely unrelated domains). That is, our image representation provides a link between these two domains, enabling classification in the graph domain to leverage the wealth of techniques available to the image domain.

The image domain has mature pre-trained models based on massive data. For example, the open-source Caffe deep learning framework is a convolutional neural network trained on the ImageNet data which can recognize everyday objects like chairs, cats, dogs etc. (BID11). We use Caffe as is. Caffe is a black box that provides a distribution over image classes which we refer to as the Caffe-classification vector. The Caffe results are then mapped back into the source domain using a distance-based heuristic, e.g., Jaccard distance and K-nearest neighbors, as in Figure 4.

Figure 4: Classification of structured images using the classification vectors obtained from Caffe (Jaccard plus K-NN).

Images, not graphs, are passed through the Caffe deep neural network, and as we shall show, one can get good performance from as little as 10% of the training data used in the ab initio machine learning approach. It is quite stunning that such little training data together with un-tweaked transfer learning from a completely unrelated domain can perform so well. The reason is that our image representation provides very structured images (human-recognizable) for real world networks. Though these images are not traditional images like those used in training Caffe, Caffe still maps the different structured images to different distributions over its known classes, hence we are able to transfer this knowledge from Caffe to graph classification.

Earlier work introduced the problem we study: Can one identify the parent network from a small subgraph? How much does local information reveal about the parent graph at the global level? We approach the problem from the supervised setting and the unsupervised transfer learning setting. There is previous work on similar problems using graph kernels (BID9; BID10; BID12). Such methods use kernels to compute similarities between graphs and then algorithms like SVM for classification. Choosing kernels is not straightforward and is certainly not a one-size-fits-all process. Further, these kernel methods do not scale well for very large graphs.
We compare with one such method proposed by BID21. Another approach is to construct feature vectors from topological attributes of the subgraph (BID17). The topological characteristics of social networks have been extensively studied by BID0, BID3, and BID22. The shortcoming of this approach is that it is difficult to come up with a master set of features that can be used to represent graphs from different domains. For example, assortativity could be an important feature in social networks while being of little significance in case of road networks. It is hard to identify beforehand what features need to be computed for a given problem, thus leading to a trial and error scenario. One of the methods we compare against is logistic regression on such features.

Transfer learning is useful when a classification task in one domain can leverage knowledge learned in a related domain (see BID18 for an extensive survey). BID20 introduced a method called self-taught learning which takes advantage of irrelevant unlabeled data to boost performance. BID30 discuss heterogeneous transfer learning, where they use information from text data to improve image classification performance. BID19 create a new representation from kernel distances to large unlabeled data points before performing image classification using a small subset of reference prototypes.

The rest of the paper is organized as follows. In Section 2 we give more details of our approaches to subgraph classification. Section 3 presents the results comparing the performance of our approach with other approaches. We conclude in Section 4 with some possible future directions.

Given fragments of a large network, we first obtain the structured image representation. In the supervised setting, we pass these training samples to the classifiers and record the results. In the unsupervised setting, we feed the training samples to the Caffe framework and obtain the label-vectors. To classify a test subgraph, we first obtain its label-vector through Caffe, compute the distance between the test vector and the training vectors, and classify using the majority class of the nearest k vectors. We explain each step in detail below.
However, we also tested a wide variety of other methods as outlined in the table below (more details are in the Appendix, including the references). All these methods are tested in a standard supervised learning framework where n training examples (x i, y i) are given (the input x i and y i the target label). For DBN, CNN and SdA, the input x i is our image representation of the subgraph. For DCNN and GK, the input x i is the graph itself, represented as an adjacency matrix. For LR, the input x i is a set of 15 classical features (assortativity, clustering coefficient, etc.). Caffe is a deep learning framework developed by BID8, Berkeley AI Research and by community contributors with expressive and modular architecture in mind. It has been extensively used in image classification and filter visualization, learning handwritten digital data, and style recognition among other things as seen in. We use a pre-trained model that is trained on a crowd-sourced labeled data set ImageNet. As of 2016, ImageNet had more than 10 million hand-annotated images. The massive volume combined with a deep convolutional neural network gives us fine-grained discriminatory power for images. Given an image, the output of our Caffe-based image classification function is a vector of (label, label-probability) tuples, sorted in decreasing order of probabilities. An example of the output for a real image is shown in Figure 2a. Although Caffe has not been trained on image embeddings of graphs, such images nonetheless produce vectors that have sufficient discriminatory information that we extract using the post processing step (see Section 2.3.1). An example of a vector corresponding to a Facebook network sample is shown in Figure 2b.Caffe provides either maximally accurate or maximally specific classification. We use the maximally specific categorization option in Caffe for our work. Further, while we have shown cardinality-5 vectors for brevity in Figure 2, we use cardinality-10 vectors in our experiments. Caffe provides a set of label vectors L i for each training network x i. Each label vector is a tuple of (label, label-probability pairs) as deemed by Caffe. In this work, we ignore the probabilities, and treat each vector as an unordered list of labels (strings). Each training vector also has a ground truth parent label. We use Jaccard similarity to compute a similarity metric between two label-vectors L j and L k: DISPLAYFORM0 We leave for future work the use of more sophisticated metrics which could use the probabilities from Caffe -our goal is to demonstrate the potential of even this simplest possible approach. For a test graph, we get the label-vector T from Caffe and then compute the k nearest training vectors using the Jaccard distance d(L i, T) to each training vector L i and classify using the majority class C among these k nearest training vectors (ties are broken randomly). The test example is correctly classified if and only if its ground truth matches C. One advantage of the k nearest neighbor approach is that it seamlessly extends to an arbitrary number of parent network classes. We used a variety of datasets ranging from citation networks to social networks to e-commerce networks (see TAB3 and the brief descriptions in the Appendix). Our deep networks learn signatures from the image representation of each graph class. To gein some insight into these network signatures, we show the top principal component of the images of each network. This process is very similar to the one carried out in BID24. 
We used a variety of datasets ranging from citation networks to social networks to e-commerce networks (see TAB3 and the brief descriptions in the Appendix). Our deep networks learn signatures from the image representation of each graph class. To gain some insight into these network signatures, we show the top principal component of the images of each network. This process is very similar to the one carried out in BID24. We grouped all the samples into their respective categories. Then, we vectorized each sample; i.e., we reshaped each sample from n × n to 1 × n². We then performed principal component analysis on the vectorized dataset. We show the top principal component for each network in FIG2.

In FIG2, we show the structured image representations of sample 64-node subgraphs from each of the 9 datasets. These are adjacency matrices that have gone through the structuring process described in Section 2.1. Observe that the images for different graphs are well structured and quite different. This is why the deep networks are able to perform well at classifying the subgraphs, as we will see later in Section 3. Our approach leverages the structure as well as the distinctness of the image representations.

In this section we describe our experimental setup and present the results we obtained from the two approaches we have described in the earlier sections. We perform the graph classification task using the above mentioned 9 parent networks. We perform a random walk on each of these networks 5,000 times, and we stop when we get the required number of nodes per sample, denoted by n (a minimal sketch of this sampling step is given at the end of this setup description). We carry out this exercise 4 times and set n to 8, 16, 32 and 64 respectively. So, with 9 networks and 5,000 samples per network, we create 4 datasets with 45,000 samples each. Each dataset is of size 45,000 × n × n. For a given dataset, we randomly chose 33% of the dataset for validation and set aside 33% of the dataset for testing. The accuracy score is defined as the ratio of the sum of the principal diagonal entries of the confusion matrix over the sum of all the entries of the confusion matrix (sometimes called the error matrix). A confusion matrix C is such that C_{i,j} is equal to the number of observations known to be in class i but predicted (confused) to be in class j. We report the best accuracy score for each classifier in the following table.
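As referenced above, a minimal sketch of the random-walk sampling step; networkx is our own library choice, and restarting from a fresh random node on dead ends is an assumption not specified in the text.

```python
import networkx as nx
import numpy as np

def random_walk_sample(G, n, rng=None):
    """Grow an n-node vertex set by a random walk on parent network G,
    then return the induced subgraph."""
    rng = rng or np.random.default_rng()
    nodes = list(G.nodes())
    current = nodes[rng.integers(len(nodes))]
    visited = {current}
    while len(visited) < n:
        nbrs = list(G.neighbors(current))
        if not nbrs:                               # dead end: restart the walk
            current = nodes[rng.integers(len(nodes))]
        else:
            current = nbrs[rng.integers(len(nbrs))]
        visited.add(current)
    return G.subgraph(visited).copy()
```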
CNN was the best performing classifier, while DCNN and GK were the poorest performers. This shows that off-the-shelf deep learning with graph image features, with no tuning of the classifier, is better than any other method we tested. FIG3 summarizes the performance of all the methods we tested concisely. As expected, we obtained higher accuracy in classification as the number of nodes per sample n increased. Note that the straight line in the figure refers to the accuracy achieved by random guessing of the classes. We would like to point out that even with only 8 nodes, we were able to do significantly better than random, while also outperforming the graph kernel methods and the feature based logistic regression classifier. We would like to note that although LR performs okay, it is very hard to choose the features when graphs come from different domains. One set of features that worked best in one scenario may not be the best in another. So, when an untuned CNN that requires very little effort from the user handily outperforms these cumbersome methods, it is an easy choice to make. This is one of the biggest observations of our study.

We also mixed the samples with different n's to create hybrid datasets. We observed that the performance of the classification was better when the mixture had more samples with higher n. This is in line with our expectation and the results in FIG3. Interested readers can refer to the Appendix for more details. Graph kernel and feature-based methods performed better than DCNN, but not as well as the image embedding based methods. Kernel methods are complex and usually slow, and the fact that they performed poorly does not make them attractive. One may notice that the accuracy scores for LR are very comparable to SdA. However, we would like to remark that LR is a lossy method, since it approximates the graph and boils it down to a handful of features. It is very hard to decide which features must be used, and the choice may vary for graphs in different domains in order to get optimal results. However, our approach is completely lossless: the structured image representation of the graph has every bit of the information the adjacency matrix does. So, we do not have to make any compromises to get the best out of the data.

We would like to make a note about DCNN. Out of the 4 neural network classification models we have used in this work, DCNN is the only one that takes a graph as an input instead of images. In fact, it takes two inputs: an adjacency matrix and a design matrix. The design matrix contains information about each node in the adjacency matrix. For example, information like average degree, clustering coefficient, etc. can be provided in the design matrix. In order to make the comparison between DCNN and the other classification models as fair as possible, we specified the values of the pixels in the image embedding in our design matrices throughout our experiments. When other information (assortativity, centrality, etc.) was provided, we observed no significant increase in performance. This is because these properties can be calculated from the graph, which is already an input; the neural network is expected to have learned these features already.

In this section we present our experimental results for our second approach to graph classification: transfer learning. We show that our transfer learning approach is highly resilient to sparse training data. We achieve a respectable accuracy even when only 10% of the data was used for training. Caffe can be treated as a black-box that requires very little interference from the user. This is significant because when one does not have access to ample data to train their own neural networks, transfer learning can be a very quick and effective fix to get the job done. These are the other two big observations to take home from our work.

We also carried out the following experiments: (a) two-way classification between pairs of parent networks, and (b) multi-way classification across all parent networks. The detailed results are relegated to the Appendix, but we note that accuracy scores were in the high 90s and 80s for (a) and around the 60s for (b). While the multi-class scores are not as glamorous as those presented in Section 3.1, they are still worth mentioning. FIG4 plots the accuracy numbers from the (a) set of experiments as we progressively increase the proportion of data used for training. Each point on the x-axis shows the percentage of available data that was used for training, with the remainder used for testing. Note that the reduction of the training percentage hardly impacts accuracy, except for a slight dip when only 10% is used for training. Most learning techniques, especially deep neural networks, are sensitive to training data volume. The relative insensitivity of our approach is likely due to the fact that we leverage a pre-trained recognition engine in the image domain which has already been trained with a massive volume of images. This shows that the transfer learning approach is very robust and resilient to sparse training data. Finally, we study the impact of k, the neighborhood size for the majority rule, on accuracy.
We show the detailed analysis in the Appendix, but the take-away is that as long as k > 15, we do not have to worry too much about tuning k, showing once again that the approach is robust.

Our experiments overwhelmingly show that the structured image representation of graphs achieves successful graph classification with ease. The image representation is lossless, that is, the image embeddings contain all the information in the corresponding adjacency matrix. Our results also show that even with very little information about the parent network, deep network models are able to extract network signatures. Specifically, with just 64-node samples from networks with up to 1 million nodes, we were able to predict the parent network with > 90% accuracy, while being significantly better than random with only 8-node samples. Further, we demonstrated that the image embedding approach provides many advantages over graph kernel and feature-based methods.

We also presented an approach to graph classification using transfer learning from a completely different domain. Our approach converts graphs into 2D image embeddings and uses a pre-trained image classifier (Caffe) to obtain label-vectors. In a range of experiments with real-world data sets, we have obtained accuracies from 70% to 94% for 2-way classification and 61% for multi-way classification. Further, our approach is highly resilient to the training-to-test ratio, that is, it can work with sparse training samples. Our results show that such an approach is very promising, especially for applications where training data is not readily available (e.g. terrorist networks). Future work includes improvements to the transfer learning by improving the distance function between label-vectors, as well as using the probabilities from Caffe. Further, we would also look to generalize this approach to other domains, for example classifying radio frequency map samples using transfer learning.

Deep Belief Network (DBN). DBNs consist of multiple layers of unsupervised Restricted Boltzmann Machines (RBMs), where the output of each RBM is used as input to the next (BID7). The RBMs can be trained greedily, and a supervised back-propagation step can be used for fine-tuning. Typically, DBNs have an input layer, hidden layer(s) and a final output layer. The input layer contains a node for each of the entries in the feature vector. For example, if the input is an image of size 8 × 8, then there will be 8 × 8 = 64 nodes in the input layer. The hidden layers consist of RBMs where the output of each RBM is used as input to the next. Finally, the output layer contains a node for each class. The probabilities of each class label are returned; the one with the highest probability is chosen as the overall classification for the given input.

Convolutional Neural Network (CNN). CNNs are a category of neural networks that have been proven to be very effective in image classification tasks. Although they have been extensively used on real world images, we believe it is essential to include CNNs in our experiments. The building blocks of a CNN are convolution layers, non-linear layers such as Rectified Linear Units (ReLU), pooling layers, and fully connected layers for classification.

Stacked De-Noising Auto-Encoder (SdA). We implement the stacked de-noising auto-encoder (SdA) based on the greedy training algorithm presented in prior work. In a regular multi-layer deep neural network, each layer is trained to "reconstruct" the input from the previous layer.
Diffusion-Convolutional Neural Network (DCNN). This model, introduced by BID1, works on the graphs themselves rather than on the image embeddings of their adjacency matrices. DCNNs provide a flexible representation of graphical data that encodes node features, edge features, and purely structural information with little preprocessing. DCNNs learn diffusion-based representations from graph-structured data, made possible by the new diffusion-convolution operation.

Graph Kernel Approach. We use a graph kernel introduced in BID21, which uses a rapid feature extraction scheme based on the Weisfeiler-Lehman test of isomorphism on graphs. It maps the original graph to a sequence of graphs whose node attributes capture topological and label information. We use the code provided by Ghisu as-is in our experiments.

Feature-based Classification. We compute 15 classic features for every graph sample and use just these to perform classification using logistic regression. The features are: transitivity, average clustering coefficient, average node connectivity, edge connectivity, average eccentricity, diameter, average shortest path, average degree, fraction of single-degree nodes, average closeness centrality, central points, density, average neighbor degree, and the top two eigenvalues of the adjacency matrix. A sketch of this feature extraction is given at the end of this appendix.

Citation. This citation network is from Arxiv HEP-PH (high energy physics phenomenology). If a paper i cites paper j, the graph contains a directed edge from i to j. There are 34,546 nodes with 421,578 edges. See: BID14; BID4.

Facebook. This social network contains "friends lists" from Facebook. There are 4,039 people (nodes), and there is an undirected edge between nodes if they are friends. There are 88,234 such edges. See: BID13.

Road Network. This is a road network of Pennsylvania. Intersections and endpoints are represented by nodes, and the roads connecting these intersections are represented by undirected edges. There are 1,088,092 nodes and 1,541,898 edges in this network. See: BID16.

Web. Nodes in this network represent web pages, and directed edges represent hyperlinks between them. There are 875,713 nodes and 5,105,039 edges. See: BID16.

Wikipedia. This is a Wikipedia hyperlink graph; a condensed version of Wikipedia was used in the collection of this dataset. There are 4,604 articles (nodes) with 119,882 links (edges) between them. See: BID25; BID26.

Amazon. This is a product co-purchase network of amazon.com. The nodes are products sold on amazon.com, and there is an undirected edge between two products if they are frequently co-purchased. There are 334,863 nodes and 925,872 edges. See: BID15.

DBLP. This is a co-authorship network. It has authors for its nodes, and there is an undirected edge between two authors if they have co-authored at least one paper. There are 317,080 nodes and 1,049,866 edges.

We present the detailed results for the different mixtures of datasets that we experimented with in the supervised setting. Although performance deteriorates when different n's are mixed, the relative ordering of the methods with respect to their performance remains the same.

Figure 10 shows the variation of accuracy when k is varied in steps of 8 from k = 7 to k = 47, with the base case k = 15 used for the results tabulated above included for comparison. As can be seen, except for a couple of cases where k = 7 gives a somewhat lower accuracy, the variation is within 1% of the base case.
Thus, as long as k > 15, we do not have to worry too much about tuning k, showing that the approach is robust.
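For completeness, here is the feature-extraction sketch referenced in the feature-based classification paragraph above, showing how the 15 classic features could be computed with networkx. Exact definitions (e.g., of "central points") may differ slightly from our implementation, so treat this as an approximation; it assumes the input graph is connected (otherwise eccentricity and diameter are undefined).

```python
import networkx as nx
import numpy as np

def classic_features(G):
    """Approximate the 15 classic graph features used by the
    feature-based logistic-regression baseline."""
    degrees = [d for _, d in G.degree()]
    ecc = nx.eccentricity(G)                       # assumes G is connected
    top2 = np.linalg.eigvalsh(nx.to_numpy_array(G))[-2:]  # ascending order
    return [
        nx.transitivity(G),
        nx.average_clustering(G),
        nx.average_node_connectivity(G),
        nx.edge_connectivity(G),
        float(np.mean(list(ecc.values()))),        # average eccentricity
        nx.diameter(G),
        nx.average_shortest_path_length(G),
        float(np.mean(degrees)),                   # average degree
        degrees.count(1) / G.number_of_nodes(),    # fraction of single-degree nodes
        float(np.mean(list(nx.closeness_centrality(G).values()))),
        len(nx.center(G)),                         # number of central points
        nx.density(G),
        float(np.mean(list(nx.average_neighbor_degree(G).values()))),
        float(top2[1]),                            # largest adjacency eigenvalue
        float(top2[0]),                            # second-largest eigenvalue
    ]

# Toy usage on a small connected benchmark graph.
print(classic_features(nx.karate_club_graph()))
```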
We convert subgraphs into structured images and classify them using (1) deep learning and (2) transfer learning (Caffe), achieving stunning results.
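As a final illustration of the first step of that pipeline, here is a minimal sketch that renders a sampled subgraph's adjacency matrix as a grayscale image. The node-ordering scheme of our actual structured embedding and the subgraph sampling procedure are not reproduced here; ordering nodes by decreasing degree and sampling nodes uniformly at random are illustrative stand-ins only.

```python
import networkx as nx
import numpy as np
from PIL import Image

def graph_to_image(G, size=64):
    """Order nodes (here: by decreasing degree, as a stand-in for the
    structured ordering) and render the adjacency matrix as an image."""
    order = sorted(G.nodes(), key=lambda n: G.degree(n), reverse=True)
    A = nx.to_numpy_array(G, nodelist=order)
    img = Image.fromarray((A * 255).astype(np.uint8))
    return img.resize((size, size), Image.NEAREST)

# Toy usage: sample a 64-node subgraph from a synthetic parent network.
G = nx.barabasi_albert_graph(1000, 3, seed=0)
nodes = list(np.random.default_rng(0).choice(G.number_of_nodes(), 64, replace=False))
graph_to_image(G.subgraph(nodes)).save("subgraph.png")
```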